The software stack orchestrates the entire AI interaction, from capturing your typed text to delivering the AI's response.
Orange Pi Zero 2W: The Client Brain
Running Armbian OS, the Orange Pi hosts a Python script that monitors the middle mouse button. Upon activation, it communicates with the PC Companion App to initiate text capture and, subsequently, sends the full prompt to the Ollama server.
Python Client Script (Excerpt):
# GPIO Setup (Conceptual)
import gpiod
# ...
chip = gpiod.Chip('gpiochip0')
ai_trigger_line = chip.get_line(17) # Your specific GPIO pin
ai_trigger_line.request(consumer="ai_mouse_button", type=gpiod.LINE_REQ_DIR_IN, flags=gpiod.LINE_REQ_FLAG_BIAS_PULL_UP)
# ...
# Ollama API Call (from Orange Pi)
import requests
# ...
response = requests.post(OLLAMA_API_URL, headers=headers, json=payload, timeout=300)
response.raise_for_status()
llm_response = response.json().get("response", "").strip()
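The edge detection behind the button trigger can be sketched independently of the hardware. In this sketch the successive pin levels (read on the Pi with `ai_trigger_line.get_value()`) are passed in as a plain sequence; the function name and sampling approach are illustrative, not taken from the project:

```python
def detect_presses(samples):
    """Return the indices at which a new button press begins.

    With pull-up wiring the line idles at 1 and reads 0 while the
    button is held, so a press is a 1 -> 0 (falling) edge.
    """
    presses, last = [], 1
    for i, level in enumerate(samples):
        if last == 1 and level == 0:
            presses.append(i)
        last = level
    return presses

# A held button produces one press, not one per sample:
# detect_presses([1, 1, 0, 0, 1, 0]) -> [2, 5]
```

In the real polling loop, each press would kick off the capture-and-query sequence exactly once, no matter how long the button stays down.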
PC Companion App: The Input Interceptor
Developed in Python, this application runs on your main PC. It features a custom keylogger (using `pynput`) that captures individual keypresses, bypassing traditional copy/paste restrictions. It acts as a bridge, sending the buffered text to the Orange Pi when commanded by the mouse button.
PC Companion App (Keylogger Excerpt):
import pynput.keyboard
# ...
typed_buffer = []
# ...
def on_press(key):
    global typed_buffer
    if recording_active:  # Activated by Orange Pi command
        try:
            if hasattr(key, 'char') and key.char is not None:
                typed_buffer.append(key.char)
            elif key == pynput.keyboard.Key.backspace:
                if typed_buffer:
                    typed_buffer.pop()
            # ... handle other keys ...
        except AttributeError:
            pass
# ...
listener = pynput.keyboard.Listener(on_press=on_press)
listener.start()
# ... TCP/IP communication to Orange Pi
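The TCP leg elided above can be sketched with the standard library. The port number and the newline-terminated UTF-8 framing here are assumptions for illustration, not the project's actual protocol:

```python
import socket

def send_buffer(text, host, port=5555):
    """Ship the captured keystroke buffer to the Orange Pi.

    Assumes a simple framing: one UTF-8 payload terminated by a
    newline per connection (port 5555 is an arbitrary example).
    """
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(text.encode("utf-8") + b"\n")
```

On the Orange Pi side, a matching listener would `accept()` a connection, read up to the newline, and treat the decoded text as the prompt to forward to Ollama.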
LattePanda Sigma: The AI Server Hub
Running Ubuntu Server, the LattePanda hosts Ollama, providing a powerful and efficient environment for running the Meta-Llama-3-8B-Instruct model. Ollama handles the heavy lifting of LLM inference, responding to requests from the Orange Pi via its API.
Ollama Setup (Terminal Commands):
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
# Download Llama 3 model
ollama pull llama3  # default tag pulls the 8B instruct variant
# Example API interaction (from Orange Pi)
# POST http://YOUR_LATTEPANDA_IP:11434/api/generate
# { "model": "llama3", "prompt": "Your captured text here", "stream": false }
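The API interaction above can be wrapped in a small function (the function name and defaults are illustrative). Setting `"stream": False` matters here: without it, `/api/generate` returns a stream of line-delimited JSON chunks rather than the single object that `response.json()` expects:

```python
import requests

def generate(prompt, host, model="llama3", port=11434, timeout=300):
    """Call Ollama's /api/generate endpoint and return the completion.

    "stream": False makes the server send one JSON object containing
    the full completion in its "response" field.
    """
    resp = requests.post(
        f"http://{host}:{port}/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=timeout,
    )
    resp.raise_for_status()
    return resp.json().get("response", "").strip()
```

Port 11434 is Ollama's default; make sure the LattePanda's firewall allows it so the Orange Pi can reach the server over the LAN.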