In the previous article we learned how to run models locally. But how do we actually use them in an application?
Good question.
There are some nice libraries for that. Did you know that the OpenAI library doesn't only support the OpenAI API? We can also use it to talk to a local LLM, and the nice part is that it's quite simple.
We change:

```js
const openai = new OpenAI({
  apiKey: "super secret openai key", // this is the default and can be omitted
});
```

to
```js
const oai = new OpenAI({
  baseURL: "http://localhost:11434/v1", // the port the Ollama server is running on
  apiKey: "ollama", // this field can hold any value; Ollama ignores it
});
```
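Before wiring up anything fancier, it can help to see what the OpenAI client actually sends under the hood. Here is a minimal sketch using plain `fetch` against Ollama's OpenAI-compatible endpoint, assuming the server is running on port 11434 with the `mistral` model pulled (the prompt text and function name here are illustrative; `fetch` requires Node 18+):

```javascript
// The request body the OpenAI client would build for a simple chat completion.
const payload = {
  model: "mistral",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Say hello in one word." },
  ],
};

// Ollama exposes the OpenAI-compatible API under /v1/chat/completions.
async function askLocalModel() {
  const res = await fetch("http://localhost:11434/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Bearer ollama", // any value works; Ollama ignores it
    },
    body: JSON.stringify(payload),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

This is exactly what `oai.chat.completions.create(payload)` does for us, which is why swapping `baseURL` is enough to redirect the whole library.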
Cool, now that we have this set up, let's create a basic app that reads a text and answers our questions about it.
```js
import Instructor from "@instructor-ai/instructor";
import OpenAI from "openai";
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";

// A minimal example schema for the structured answer
// (the exact fields are up to you)
const UserSchema = z.object({
  age: z.number().describe("Jason's age next year"),
});

// Instructor wraps the OpenAI client so the model's answer can be
// returned as structured, schema-validated output
const client = Instructor({
  client: oai,
  mode: "FUNCTIONS",
});

const user = await client.chat.completions.create(
  {
    messages: [
      // the system message provides the context the model should use
      {
        role: "system",
        content:
          'Please use this text for answering any questions. "Jason Liu is 30 years old"',
      },
      // the user message carries the actual question
      { role: "user", content: "How old is Jason next year?" },
    ],
    model: "mistral",
    // response_model: { schema: UserSchema },
    functions: [
      {
        name: "out",
        // here we define our function to get the result from the agent in JSON form
        description:
          "This is the function that returns the result of the agent",
        // we use zod and a zod-to-json-schema converter to define the JSON Schema very easily
        parameters: zodToJsonSchema(UserSchema),
      },
    ],
  },
  {
    maxRetries: 3,
  },
);
```
Now we are using Instructor for better formatting: instead of free text, the model's answer comes back as JSON matching our schema, with up to three retries if it fails to comply.
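When you use the raw `functions` field as above, the structured answer arrives as a JSON string in the response's `function_call.arguments`. A small helper (hypothetical name, with a sample response shaped like the legacy function-calling API) shows how to pull it out:

```javascript
// A sample completion shaped like a legacy function-calling response.
const sampleCompletion = {
  choices: [
    {
      message: {
        role: "assistant",
        content: null,
        function_call: {
          name: "out",
          arguments: '{"age": 31}', // the model's structured answer, as a JSON string
        },
      },
    },
  ],
};

// Parse the JSON string the model put into function_call.arguments.
function extractFunctionArgs(completion) {
  const call = completion.choices[0].message.function_call;
  return JSON.parse(call.arguments);
}

const result = extractFunctionArgs(sampleCompletion); // → { age: 31 }
```

If you uncomment `response_model` instead, Instructor does this parsing and zod validation for you and returns the validated object directly.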