The terms “local language model” and “OpenAI” refer to different aspects of working with language models, so let’s clarify their meanings:
- Local Language Model: This term typically refers to a language model that runs on a local or on-premises system (your own computer or server) rather than relying on external cloud-based services. Local language models can be fine-tuned and customized to suit specific needs or security requirements, and they can be used for tasks such as text generation, translation, or chatbots. They are often chosen when data privacy and control are paramount.
- OpenAI: OpenAI is an artificial intelligence research organization that has developed various language models, including the GPT series. OpenAI’s models are known for their general language understanding and text-generation capabilities. OpenAI provides access to its models through APIs, allowing developers to integrate them into applications and services.
So, the key difference is that “OpenAI” refers to the organization that develops language models, while “local language model” refers to where and how a model is deployed and used. Depending on your needs and preferences, you may run an open model locally or access a provider’s model (such as OpenAI’s) through a cloud-based API.
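To make the distinction concrete, here is a minimal Python sketch that contrasts calling a cloud-hosted OpenAI model through its API with running a small open model locally. The model names, the prompt, and the choice of the openai and transformers libraries are illustrative assumptions, not a statement of how any particular product works.

```python
# Sketch: cloud-hosted model vs. locally hosted model.

# Cloud-based: the text is sent to OpenAI's servers for processing.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize our meeting notes."}],
)
print(response.choices[0].message.content)

# Local: the model weights run on your own machine, so the text never leaves it.
from transformers import pipeline

local_model = pipeline("text-generation", model="gpt2")
print(local_model("Summarize our meeting notes.", max_new_tokens=50)[0]["generated_text"])
```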
Most public language models are trained on vast amounts of data “scraped” from publicly available sources such as websites, books, and forums.
Every time you send something to OpenAI’s ChatGPT, for instance, OpenAI may use your data to further train its model. The intent is to build a broader knowledge base, which makes the model more useful for most common interactions with ChatGPT.
For some uses, a more specific language model may be needed, one trained on a narrower set of data. For example, a law office may choose to train its own language model on the kind of data it primarily works with. This improves the responses that come from the language model, since it reduces the risk of getting unrelated information. Yes, even language models can get confused or fail to understand the specific context of a question; training your own language model can help, as the sketch below illustrates.
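As a rough illustration of what “training your own language model” can look like in practice, the following sketch fine-tunes a small open model on a plain-text file of domain documents using Hugging Face libraries. The file name law_office_docs.txt, the base model gpt2, and the training settings are assumptions chosen only to keep the example short.

```python
# Minimal sketch of fine-tuning a small open model on domain-specific text.
# Assumes a local file "law_office_docs.txt" with one training example per line (hypothetical).
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

model_name = "gpt2"  # small open model, used only for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Load and tokenize the domain documents.
dataset = load_dataset("text", data_files={"train": "law_office_docs.txt"})
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the fine-tuned model is saved under "domain-model"
```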
A Large Language Model (LLM) is a powerful type of artificial intelligence system designed to understand and generate human language. It’s made up of a vast number of interconnected virtual “neurons” that can process and generate text. LLMs like GPT-3, for example, can handle a wide range of natural language tasks, such as answering questions, translating languages, writing articles, and even simulating human-like conversations. These models are trained on massive datasets, allowing them to learn patterns and nuances in language, making them valuable tools for various applications in fields like natural language processing, machine learning, and text generation.
AI brains are self-contained bodies of information. They can be used to provide context to Large Language Models (LLMs) and answer questions on a particular topic.
LLMs are trained on a large variety of data. However, to answer a question on a specific topic, or to draw conclusions from a specific topic, they need to be supplied with the context of that topic.
AI brains are an intuitive way to provide that context.
Selecting a brain provides the context of that brain to the LLM. This lets users build brains for specific topics and then use them to answer questions about those topics, without being “polluted” by information that is not in the brain. This helps prevent “hallucinations” and out-of-context answers.
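Conceptually, selecting a brain amounts to handing its content to the LLM as context and instructing the model to answer only from that context. The Python sketch below shows this idea; the brain_documents list, the model name, and the prompt wording are hypothetical and do not reflect WildcatGPT’s internal implementation.

```python
# Illustrative sketch: a selected "brain" supplies the context for an LLM answer.
from openai import OpenAI

brain_documents = [
    "Policy 12: Remote employees must submit timesheets every Friday.",
    "Policy 14: Contractors are paid on a net-30 schedule.",
]
question = "When are timesheets due?"

# The brain's content is injected into the prompt, and the model is told to
# answer only from that context, which limits out-of-context answers.
prompt = (
    "Answer the question using only the context below. "
    "If the answer is not in the context, say you don't know.\n\n"
    "Context:\n" + "\n".join(brain_documents) + f"\n\nQuestion: {question}"
)

client = OpenAI()
answer = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(answer.choices[0].message.content)
```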
Large language models (LLMs) are trained on a broad mix of general information. This makes them great at generating general content. However, very specific information may be out of date or missing from an LLM entirely.
Retrieval-augmented generation (RAG) fills this gap. Instead of piecing together a response from everything the LLM was trained on, the LLM can “ask” a specific dataset that holds the up-to-date, topical information.
A WildcatGPT AI agent, or “brain”, is such a dataset. You feed it all the specific information you are interested in. The LLM can then find that information in the RAG data and return meaningful answers from it. The responses will be as up to date as the data you have fed the brain, and it can provide the sources to support the validity of the responses.
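A bare-bones version of the retrieval step might look like the following Python sketch: the brain’s documents are embedded, the document most similar to the question is retrieved, and the LLM answers from it while citing the source. The example documents, the embedding and chat model names, and the single-document retrieval are simplifying assumptions; a production system would chunk documents and use a vector store.

```python
# Minimal retrieval-augmented generation sketch.
import numpy as np
from openai import OpenAI

client = OpenAI()

documents = [
    "The 2024 maintenance window is every second Sunday, 02:00-04:00 UTC.",
    "Support tickets are answered within one business day.",
]

def embed(texts):
    """Return one embedding vector per input text."""
    result = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in result.data])

doc_vectors = embed(documents)
question = "When is the maintenance window?"
q_vector = embed([question])[0]

# Cosine similarity picks the document closest in meaning to the question.
scores = doc_vectors @ q_vector / (
    np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q_vector)
)
best_doc = documents[int(scores.argmax())]

# Pass the retrieved document to the LLM and ask it to cite its source.
answer = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": f"Context: {best_doc}\n\nQuestion: {question}\n"
                   "Answer using the context and cite it as the source.",
    }],
)
print(answer.choices[0].message.content)
```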
Please check your Spam or Junk mail folder. If the email isn’t there, try signing up again, making sure you enter the correct email address.
If you entered the correct email and did not receive a magic link, please contact us.
- Privacy
- Transparency
- Security
- ESG:
  - Environmental
  - Social: no outsourcing or off-shoring, fair labor practices, no middleman agency adding to cost
  - Governance
- Go to the Wildcat website and click the “Sign up” link at the top right of the browser window.
- Enter your email address and click “Continue with email”. This will trigger an invitation “Magic link” email to the address you entered.
- Click on the magic link to be logged in to the site.
- If you do not see a magic link email in your Inbox, check your Spam / Junk mail folder.