SmartWrks


Our Cognitive Workflow:


The only irreproducible outside technology is the language model itself. While we explore many, we've focused our development on the Llama 3.1 Instruct models by [https://www.llama.com/ Meta] ([https://huggingface.co/lmstudio-community/Meta-Llama-3.1-8B-Instruct-GGUF Hugging Face]).
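
A minimal sketch of how the referenced weights can be fetched and smoke-tested through the Ollama Python client; the model tag and prompt are illustrative assumptions, not SmartWrks configuration.

<syntaxhighlight lang="python">
# Sketch: pull a quantised Llama 3.1 8B Instruct build into the local Ollama
# store and check that it answers. Requires `pip install ollama` and a running
# Ollama server. The model tag is an assumption; check the Ollama library for
# the exact build you want.
import ollama

MODEL = "llama3.1:8b-instruct-q4_K_M"  # illustrative tag for a quantised GGUF build

ollama.pull(MODEL)  # downloads the weights if they are not already present

reply = ollama.chat(
    model=MODEL,
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}],
)
print(reply["message"]["content"])
</syntaxhighlight>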

We run the base weights on the Ollama platform ([https://github.com/ollama/ollama Git], [https://ollama.com/ Website]), using inexpensive GPUs to run multiple "Agents". To interact with and deploy multiple agents we use LM Studio ([https://lmstudio.ai/ Website]).
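
As a sketch of what "multiple agents" can look like in practice, the example below wraps the same locally served Llama 3.1 weights in two different system prompts, one answered through Ollama's REST API and one through LM Studio's OpenAI-compatible local server. Endpoints, ports, model names and prompts are assumptions for illustration, not the SmartWrks deployment.

<syntaxhighlight lang="python">
# Two differently-prompted "agents" sharing locally served Llama 3.1 weights:
# one via Ollama's REST API, one via LM Studio's OpenAI-compatible local server.
import requests
from openai import OpenAI  # pip install openai; used here only as a client for LM Studio

OLLAMA_CHAT = "http://localhost:11434/api/chat"         # Ollama's default endpoint
lmstudio = OpenAI(base_url="http://localhost:1234/v1",  # LM Studio's default local server
                  api_key="lm-studio")                  # any non-empty key works locally


def ollama_agent(system_prompt: str, user_msg: str) -> str:
    """An 'agent' here is just a system prompt wrapped around the shared base weights."""
    resp = requests.post(OLLAMA_CHAT, json={
        "model": "llama3.1",
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_msg},
        ],
        "stream": False,
    })
    resp.raise_for_status()
    return resp.json()["message"]["content"]


def lmstudio_agent(system_prompt: str, user_msg: str) -> str:
    """Same idea, routed through whatever model is loaded in LM Studio."""
    out = lmstudio.chat.completions.create(
        model="meta-llama-3.1-8b-instruct",  # name of the model loaded in LM Studio
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_msg},
        ],
    )
    return out.choices[0].message.content


if __name__ == "__main__":
    plan = ollama_agent("You are a terse planning agent.", "Outline a three-step plan to index our documents.")
    review = lmstudio_agent("You are a critical reviewer agent.", f"Critique this plan:\n{plan}")
    print(plan, review, sep="\n---\n")
</syntaxhighlight>

Because both servers expose plain HTTP endpoints, additional agents can simply be pointed at whichever local instance has free GPU capacity.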


We expand the agents' cognitive capabilities by contextualizing information, in the form of text and documents, using Retrieval-Augmented Generation (RAG), building on and extending the LangChain and crewAI libraries.
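
Below is a minimal RAG sketch, assuming LangChain's community integrations and a local Ollama instance; package paths, chunk sizes, file names and prompts are illustrative rather than the SmartWrks pipeline, and older LangChain releases expose the same classes under slightly different import paths.

<syntaxhighlight lang="python">
# Minimal RAG sketch: split documents, embed them locally, retrieve the most
# relevant chunks and hand them to the model as context.
# Requires: pip install langchain-community langchain-text-splitters faiss-cpu ollama
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_text_splitters import RecursiveCharacterTextSplitter
import ollama

# 1. Split source documents into chunks the 8B model can digest.
raw_docs = [open(p, encoding="utf-8").read() for p in ["notes.txt"]]  # placeholder paths
splitter = RecursiveCharacterTextSplitter(chunk_size=800, chunk_overlap=100)
chunks = splitter.create_documents(raw_docs)

# 2. Embed the chunks with the local model and index them in a vector store.
#    A dedicated embedding model (e.g. nomic-embed-text) could be used instead.
embeddings = OllamaEmbeddings(model="llama3.1")
store = FAISS.from_documents(chunks, embeddings)
retriever = store.as_retriever(search_kwargs={"k": 4})

# 3. At question time, retrieve the relevant chunks and pass them as context.
question = "What does the maintenance log say about pump 3?"
context = "\n\n".join(d.page_content for d in retriever.invoke(question))

answer = ollama.chat(
    model="llama3.1",
    messages=[
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ],
)
print(answer["message"]["content"])
</syntaxhighlight>

The same retrieved context can likewise be injected into crewAI task descriptions, so each agent in a crew works from the relevant documents rather than the full corpus.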