Published on October 25, 2025 by cresencio

Setting Up Ollama with BrowserOS: Local AI Configuration Guide

This guide provides complete instructions for configuring Ollama to work with BrowserOS as a local AI provider. BrowserOS supports local model integration, allowing you to run AI models on your own hardware instead of relying on cloud-based services.

Prerequisites

Before beginning the setup process, ensure you have:

  • Ollama installed on your system
  • BrowserOS installed and running
  • Sufficient system resources for your chosen model
  • Command line access to your operating system

Installation Steps

Step 1: Install Ollama

Download and install Ollama from ollama.com.

For macOS and Linux:

curl -fsSL https://ollama.com/install.sh | sh

For Windows: Download the installer from the official website and run the executable.
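
A quick way to confirm the installation succeeded is to check the CLI version from a terminal:

ollama --version

If the command prints a version number, the Ollama binary is on your PATH and ready to use.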

Step 2: Download an AI Model

Select and download a model from the Ollama library. Common options include:

ollama pull llama3.1:8b

Alternative models:

  • mistral:7b
  • phi3:mini
  • codellama:7b

View all available models at ollama.com/library.
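
After a model finishes downloading, you can optionally run it once from the terminal to confirm it loads. Passing a prompt as an argument prints a single response and exits instead of opening an interactive session:

ollama run llama3.1:8b "Reply with one short sentence."

The first run may take longer while the model is loaded into memory.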

Step 3: Configure CORS Settings

BrowserOS requires CORS (Cross-Origin Resource Sharing) to be enabled on the Ollama server. Start Ollama with the appropriate environment variable:

For macOS and Linux:

OLLAMA_ORIGINS="*" ollama serve

For Windows PowerShell:

$env:OLLAMA_ORIGINS="*"; ollama serve

For Windows Command Prompt:

set OLLAMA_ORIGINS=*
ollama serve

This configuration allows BrowserOS to communicate with the Ollama API.
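
Before moving on, you can confirm the server is reachable from the terminal. The exact headers vary by Ollama version, but with OLLAMA_ORIGINS set you should see an Access-Control-Allow-Origin header in the response when a matching Origin is supplied (the origin below is only an example):

curl http://localhost:11434/api/version
curl -i -H "Origin: http://example.com" http://localhost:11434/api/version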

Step 4: Configure BrowserOS AI Settings

  1. Navigate to chrome://settings/browseros-ai in BrowserOS

  2. Select “Add Provider” or “Add Local Model”

  3. Enter the following configuration:

    • API Endpoint: http://localhost:11434
    • Model ID: Your chosen model (e.g., llama3.1:8b)
    • Provider Type: Ollama

  4. Save the configuration

Step 5: Verify Connection

Test the connection between BrowserOS and Ollama:

  1. Open the BrowserOS agent interface
  2. Select your configured Ollama model from the dropdown
  3. Send a test prompt to verify functionality (see the command-line check below)
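
If the in-browser test fails, it helps to rule out the server side by sending a request directly to Ollama's documented /api/chat endpoint (substitute your own model ID):

curl http://localhost:11434/api/chat -d '{
  "model": "llama3.1:8b",
  "messages": [{"role": "user", "content": "Say hello"}],
  "stream": false
}'

A JSON response here means the server and model are working, and any remaining problem is on the BrowserOS configuration side.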

Configuration Options

Model Selection

To view installed models:

ollama list

To switch models in BrowserOS:

  1. Navigate to AI settings
  2. Select a different model from your installed options
  3. Apply the changes

Performance Configuration

Adjust the context window size to manage memory use. Start an interactive session with ollama run, then set the num_ctx parameter at the >>> prompt:

ollama run llama3.1:8b
/set parameter num_ctx 2048
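
Note that the /set parameter command only applies to that interactive session. Requests made through the API control context length via the options field (num_ctx); whether BrowserOS exposes this setting depends on its UI, but you can verify the behavior directly against the API:

curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1:8b",
  "prompt": "Say hello",
  "stream": false,
  "options": {"num_ctx": 2048}
}'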

Port Configuration

If port 11434 is unavailable, configure Ollama to listen on a different port. Binding to 127.0.0.1 keeps the server reachable only from the local machine:

OLLAMA_HOST=127.0.0.1:11435 ollama serve

Update the API endpoint in BrowserOS settings to match (for example, http://localhost:11435).

Troubleshooting

Connection Failed

If BrowserOS cannot connect to Ollama:

  1. Verify Ollama is running: ps aux | grep ollama (macOS/Linux) or Get-Process ollama (Windows PowerShell)
  2. Check CORS configuration is set correctly
  3. Confirm the API endpoint matches in BrowserOS settings
  4. Ensure no firewall is blocking localhost connections (see the port check below)
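
As a direct check that something is listening on the expected port, query the socket from a terminal.

For macOS and Linux:

lsof -i :11434

For Windows:

netstat -ano | findstr 11434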

Model Not Found

If the model cannot be loaded:

  1. Verify the model is downloaded: ollama list
  2. Check the model ID matches exactly in BrowserOS settings
  3. Ensure sufficient system memory is available

Performance Issues

If model responses are slow:

  1. Check available system resources
  2. Consider using a smaller model
  3. Reduce context window size
  4. Enable GPU acceleration if available (see the check below)
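
To confirm whether a loaded model is actually using the GPU, list the running models; in recent Ollama versions the PROCESSOR column shows CPU or GPU placement:

ollama ps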

Security Considerations

Local Network Access

The OLLAMA_ORIGINS="*" configuration allows all origins to access the Ollama API. For production environments or network exposure, specify exact origins:

OLLAMA_ORIGINS="chrome-extension://your-extension-id" ollama serve

Firewall Configuration

Ensure Ollama is only accessible from localhost unless network access is required. Configure your firewall to block external access to port 11434.
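
By default Ollama binds only to 127.0.0.1 and is not reachable from other machines; a firewall rule mainly matters if you have set OLLAMA_HOST to a non-loopback address. On Linux with ufw, for example, external access to the port can be blocked with:

sudo ufw deny 11434/tcp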

Model Management

Adding Additional Models

Download additional models for different use cases:

ollama pull mistral:7b
ollama pull codellama:13b

Configure each model separately in BrowserOS AI settings to switch between them as needed.

Removing Models

To free disk space, remove unused models:

ollama rm model-name:tag

System Requirements

Minimum Requirements

  • 8GB RAM
  • 10GB available disk space per model
  • Modern CPU with AVX support (see the check below)

Recommended Configuration

  • 16GB+ RAM
  • SSD storage
  • GPU with CUDA support (NVIDIA) or Metal support (Apple Silicon)
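
If you are unsure whether your CPU supports AVX, you can check from a terminal. Apple Silicon Macs do not use AVX and rely on Metal instead.

For Linux:

grep -o 'avx[^ ]*' /proc/cpuinfo | sort -u

For macOS (Intel):

sysctl -a | grep -i avx

Any output from these commands indicates AVX support.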

Alternative Configuration: LM Studio

If you prefer a graphical interface for local model management, LM Studio provides similar functionality to Ollama with a GUI-based approach. LM Studio also integrates with BrowserOS using the OpenAI-compatible API endpoint.
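
As a rough illustration, LM Studio's local server exposes OpenAI-compatible routes, by default on port 1234 (configurable in its UI), so the endpoint configured in BrowserOS would point there instead of at Ollama. A quick reachability check, assuming the default port:

curl http://localhost:1234/v1/models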

Summary

This configuration allows BrowserOS to utilize local AI models through Ollama, providing:

  • Local processing without cloud dependencies
  • Privacy-focused AI interactions
  • No API costs or rate limits
  • Offline functionality once models are downloaded

Follow the steps outlined above to complete the setup process and begin using local AI models with BrowserOS.

Written by cresencio
