ollama

🙊

Running

Install dependencies:

pip install -r requirements.txt

Put your model in models/ and run:

python3 ollama.py serve

To run the desktop app:

cd desktop
npm install
npm start

Building

If using Apple silicon, you need a Python version that supports arm64:

wget https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-MacOSX-arm64.sh
bash Miniforge3-MacOSX-arm64.sh

Get the dependencies:

pip install -r requirements.txt

Then build a binary for your current platform:

python3 build.py

Building the desktop app

cd desktop
npm run package

API

GET /models

Returns a list of available models
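
For example, the models can be listed with a few lines of Python (a rough sketch; the base URL is an assumption, use whatever address python3 ollama.py serve reports):

import requests

# Assumed server address; adjust to wherever `python3 ollama.py serve` is listening.
BASE_URL = "http://127.0.0.1:8080"

# GET /models returns the list of available models.
response = requests.get(f"{BASE_URL}/models")
response.raise_for_status()
print(response.json())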

POST /generate

Generates completions as a series of JSON objects

model: string - The name of the model to use (from the models/ folder).
prompt: string - The prompt to use.
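
For example, a completion can be streamed with a short Python script (a rough sketch based on the description above; the base URL, the JSON request body, and the one-JSON-object-per-line framing are assumptions, and "your-model-name" stands in for a model file in models/):

import json
import requests

# Assumed server address; adjust to wherever `python3 ollama.py serve` is listening.
BASE_URL = "http://127.0.0.1:8080"

# POST /generate streams completions as a series of JSON objects.
# The JSON body and line-delimited framing below are assumptions.
payload = {"model": "your-model-name", "prompt": "Why is the sky blue?"}
with requests.post(f"{BASE_URL}/generate", json=payload, stream=True) as response:
    response.raise_for_status()
    for line in response.iter_lines():
        if line:
            print(json.loads(line))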
