Ripping out Windows and building a proper Linux + AI development machine from scratch. NVIDIA drivers, Ollama, Open WebUI — the whole stack.
You must use the NVIDIA-specific ISO — it ships with the proprietary driver pre-integrated. The generic ISO will leave you fighting the nouveau driver on a gaming laptop, which is a miserable experience.
```text
pop-os_22.04_amd64_nvidia_XX.iso
```
You need an 8GB+ USB drive. Use balenaEtcher (Windows/Mac) or Rufus (Windows only) to write the ISO. This will erase the USB drive completely.
The GU502LU has a few BIOS quirks that will block the install if you don't address them first. Plug in power, then:
Boot into BIOS: power on → spam F2 (or DEL) immediately
Save and exit BIOS (F10), then immediately plug in your USB and power on — you may need to hit F8 or ESC for the boot menu and select your USB.
When the Pop!_OS live environment loads you'll land on a desktop. The installer starts automatically or there's an icon to launch it. If the screen looks garbled or black, try adding nomodeset to the boot parameters:
At the boot menu, highlight the install entry and press `e` to edit the boot options. Find the line starting with `linux`, go to the end, add a space, then type `nomodeset` and boot. The installer is clean and minimal. Walk through the screens:
Select your internal drive (it will show up as something like `nvme0n1`). Click Erase and Install → confirm. The install takes about 10–15 minutes. The machine will reboot; remove the USB when it asks.
After reboot you'll go through a short first-run wizard:
Create your user account; the first account gets `sudo` rights (e.g. `adam`). You'll land on the Pop!_OS GNOME desktop. It's a tiling-capable desktop with the Super key as the main shortcut. Take a minute to just click around; it feels close to macOS in some ways.
Open a terminal (Super → type "terminal" → enter, or right-click desktop) and run the full update stack. Do this before anything else.
```shell
# Update package lists and upgrade everything
sudo apt update && sudo apt upgrade -y

# Also run Pop!_OS's own upgrade tool
sudo pop-upgrade release upgrade
```
This may take a few minutes. Reboot when done if it asks.
This is the big one. Run these two commands to confirm your GTX 1660 Ti is recognized and the driver is loaded correctly:
```shell
# Check driver version — should show 525+ or similar
nvidia-smi

# Confirm GPU details
lspci | grep -i nvidia
```
nvidia-smi should output a table showing your GTX 1660 Ti, the driver version, and CUDA version. If you see this, you're in great shape.
```shell
# If needed — manual driver install
sudo apt install system76-driver-nvidia
```
A one-liner to grab the tools you'll use constantly:
```shell
sudo apt install -y \
  curl wget git htop neofetch \
  build-essential python3-pip \
  docker.io docker-compose \
  flatpak gnome-software-plugin-flatpak
```
```shell
# Add Flathub for app installs (like Obsidian)
flatpak remote-add --if-not-exists \
  flathub https://dl.flathub.org/repo/flathub.flatpakrepo

# Add yourself to the docker group (no sudo needed for docker)
sudo usermod -aG docker $USER

# Apply group change (or just reboot)
newgrp docker
```
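Group membership only applies to new login sessions (or the shell where you ran `newgrp`), so it's worth checking before you start running `docker` without `sudo`. A quick sketch:

```shell
# Check whether the current session already has docker group rights.
# `id -nG` prints the current user's groups as one space-separated line.
if id -nG | grep -qw docker; then
  echo "docker group active in this session"
else
  echo "not yet: log out and back in, or run: newgrp docker"
fi
```

The `-w` flag matters here: it matches `docker` only as a whole word, so a group like `dockerd` would not produce a false positive.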
Run `neofetch` in the terminal. You'll get a satisfying system info display with the Pop!_OS logo. Required ritual.
Ollama is a single command install. It sets itself up as a background service and handles CUDA GPU detection automatically.
```shell
# Install script handles everything — driver detection, service setup
curl -fsSL https://ollama.com/install.sh | sh
```
After install, verify the service is running:
```shell
sudo systemctl status ollama
# Should show: Active: active (running)
```
Ollama's API is now listening on `localhost:11434`. Start with llama3.2:3b — it's small, snappy, and great for getting your first local AI conversation running. Then pull the 7B workhorse.
```shell
# Start small — 3B model, fast on your GPU
ollama pull llama3.2:3b

# The real workhorse — 7B quantized, fits in 6GB VRAM
ollama pull mistral:7b-instruct-q4_K_M

# Coding-focused model (great for learning Linux commands)
ollama pull qwen2.5-coder:7b
```
Chat directly in the terminal to verify it works:
```shell
ollama run llama3.2:3b
# Type a message, hit enter. Ctrl+D or /bye to exit.
```
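The same models are also reachable over Ollama's HTTP API on port 11434, which is what Open WebUI will talk to later. A minimal sketch with `curl`, assuming you've pulled `llama3.2:3b`:

```shell
# Build the JSON request body, then POST it to Ollama's generate endpoint.
# stream:false returns one complete JSON response instead of chunks.
body=$(printf '{"model":"%s","prompt":"%s","stream":false}' \
  "llama3.2:3b" "Say hello in one sentence.")
curl -s http://localhost:11434/api/generate -d "$body"
```

If this returns JSON with a `response` field, anything on the machine can use your local models, not just the `ollama` CLI.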
To confirm your GPU is actually being used (not CPU-only), open a second terminal while the model is running:
```shell
# Watch GPU memory usage in real time
nvidia-smi -l 1

# You should see GPU memory usage jump when running a model
# llama3.2:3b ≈ 2GB VRAM | mistral:7b-q4 ≈ 4.5GB VRAM
```
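If you're eyeing other models, you can ballpark VRAM before pulling. This is a rough rule of thumb, not Ollama's actual allocator: assume about 0.55 bytes per parameter at q4_K_M quantization, plus roughly 1 GB of overhead for context and CUDA buffers.

```shell
# Rough VRAM estimate for a q4_K_M quantized model (rule of thumb only).
# params_b is billions of parameters; 55 is hundredths of a byte per param.
params_b=7
est_mb=$(( params_b * 1000 * 55 / 100 + 1024 ))
echo "~${est_mb} MB VRAM"
```

For a 7B model this lands near the ~4.5 GB figure above, comfortably inside the GTX 1660 Ti's 6 GB.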
Open WebUI gives you a polished chat interface that connects to your local Ollama instance. It runs as a Docker container and auto-starts with the system.
```shell
docker run -d \
  --network=host \
  -v open-webui:/app/backend/data \
  -e OLLAMA_BASE_URL=http://127.0.0.1:11434 \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```
Wait about 60 seconds for the container to initialize on first run, then open http://localhost:8080 in your browser. The first account you create becomes the admin.
Once logged in, you can select any model you've pulled with Ollama from the dropdown and start chatting. Your conversation history is saved locally to the Docker volume.
The --restart always flag means Open WebUI starts automatically on every boot. Here are the commands you'll use to manage it:
```shell
# Check container status
docker ps

# Stop Open WebUI
docker stop open-webui

# Start it back up
docker start open-webui

# Update to latest version
docker pull ghcr.io/open-webui/open-webui:main
docker stop open-webui && docker rm open-webui
# Then re-run the docker run command from step 12
```
You can chat with your local models from any device on the same Wi-Fi network: your Mac, phone, tablet, whatever. First, recreate the container with an explicit port mapping instead of host networking, so exactly one port is published to the network:
```shell
# Stop and remove the existing container
docker stop open-webui && docker rm open-webui

# Re-run with the port published on all interfaces. Without host networking,
# 127.0.0.1 inside the container is the container itself, so point Open WebUI
# at the host through Docker's host-gateway alias instead.
docker run -d \
  -p 0.0.0.0:8080:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```
Then open port 8080 through the firewall (if `ufw` is inactive, `sudo ufw status` will tell you, and nothing is blocked anyway):
```shell
sudo ufw allow 8080
```
Ollama also needs to listen on all interfaces — by default it only accepts connections from localhost, which means the Docker container can't reach it. Fix this by editing the Ollama service config:
```shell
sudo systemctl edit ollama
```
This opens an editor. Add these two lines, then save and exit:
```ini
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
```
Then reload and restart:
```shell
sudo systemctl daemon-reload
sudo systemctl restart ollama
docker restart open-webui
```
Now find the laptop's local IP address:
```shell
hostname -I
# Grab the first IP — something like 192.168.1.XXX
```
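`hostname -I` prints every address the machine has (Wi-Fi, `docker0`, and so on), space-separated. A one-liner to grab just the first field, assuming the LAN address is listed first, which is the common case:

```shell
# First field of hostname -I is usually the primary LAN address
lan_ip=$(hostname -I | awk '{print $1}')
echo "Open WebUI: http://${lan_ip}:8080"
```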
On any other device on the same network, open a browser and go to:
```text
http://192.168.1.XXX:8080
```
Obsidian is available on Linux via Flatpak. Your vault lives wherever you sync it via Syncthing, so the moment Obsidian opens it you're back in your second brain.
```shell
flatpak install flathub md.obsidian.Obsidian

# Run it
flatpak run md.obsidian.Obsidian
```
Point it at your Syncthing vault folder and all your notes, templates, and plugins will be right where you left them. Install the Smart Connections plugin and point it at your local Ollama endpoint (http://localhost:11434) for AI-powered vault search.
Claude Desktop has a Linux build. Download it from claude.ai/download — grab the .deb package for Debian/Ubuntu-based systems (Pop!_OS is Ubuntu-based).
```shell
# Install the .deb package (adjust filename to match download)
sudo dpkg -i claude-desktop_*.deb

# If dependency errors, fix them with:
sudo apt --fix-broken install
```
Your MCP config carries over — the claude_desktop_config.json on Linux lives at:
```text
~/.config/Claude/claude_desktop_config.json
```

```shell
# Open it to add your MCP servers
nano ~/.config/Claude/claude_desktop_config.json
```
Add your MCP server entries (e.g. `mcp-obsidian`) pointing at your vault path. If you used uvx for voice-mode, the setup is identical to your Mac: the same `uvx --python 3.11 voice-mode` command.
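For reference, a hypothetical entry might look like the sketch below. The exact `command`, `args`, and environment variables depend on the MCP server you're using, so check its README rather than copying this verbatim; the vault path is a placeholder.

```json
{
  "mcpServers": {
    "obsidian": {
      "command": "uvx",
      "args": ["mcp-obsidian"],
      "env": {
        "OBSIDIAN_VAULT_PATH": "/home/YOUR_USER/path/to/vault"
      }
    }
  }
}
```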
| Command | What it does |
|---|---|
| `sudo apt update && sudo apt upgrade -y` | Update the whole system |
| `ollama list` | Show downloaded models |
| `ollama run mistral:7b-instruct-q4_K_M` | Chat with Mistral in the terminal |
| `ollama pull <model>` | Download a new model |
| `nvidia-smi` | Check GPU status and VRAM usage |
| `docker ps` | List running containers |
| `docker logs open-webui` | Debug Open WebUI if it misbehaves |
| `systemctl status ollama` | Check Ollama service health |
| `sudo systemctl restart ollama` | Restart the Ollama service |
| `df -h` | Check disk space |
| `htop` | Task manager in the terminal |
| `neofetch` | Feel good about your setup |