Full Install Guide · March 2026

Going full
Pop!_OS

ASUS ROG Zephyrus M15 GU502LU

Ripping out Windows and building a proper Linux + AI development machine from scratch. NVIDIA drivers, Ollama, Open WebUI — the whole stack.

i7-10750H · GTX 1660 Ti (6GB VRAM) · 16GB DDR4 · 512GB NVMe · Pop!_OS 22.04 LTS · NVIDIA ISO
Phase One

Before You Touch Anything

01 Download Pop!_OS 22.04 LTS — NVIDIA ISO

You must use the NVIDIA-specific ISO — it ships with the proprietary driver pre-integrated. The generic ISO will leave you fighting the nouveau driver on a gaming laptop, which is a miserable experience.

Direct download Go to pop.system76.com → click Download → select NVIDIA under the download options. The filename will look like pop-os_22.04_amd64_nvidia_XX.iso
File size The NVIDIA ISO is about 2.2GB. Verify the SHA256 checksum on the download page if you want to be sure the download isn't corrupted (optional but good habit).
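Checksum verification is one command — compare the local hash against the value listed on the download page (the filename below is a pattern; match yours):

```shell
# Print the SHA256 of the downloaded ISO (path/filename is an example)
sha256sum ~/Downloads/pop-os_22.04_amd64_nvidia_*.iso

# The printed hash must match the SHA256 shown on pop.system76.com
# character for character — any difference means a corrupted download
```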
02 Flash the USB drive

You need an 8GB+ USB drive. Use balenaEtcher (Windows/Mac) or Rufus (Windows only) to write the ISO. This will erase the USB drive completely.

  • Download balenaEtcher from etcher.balena.io — it's the easiest, most foolproof option
  • Open Etcher → Flash from file → select your Pop!_OS ISO
  • Select your USB drive (double-check you're not pointing at an external HDD)
  • Click Flash — takes 5–10 minutes
If using Rufus on Windows When Rufus asks about ISO mode vs DD mode, choose DD Image mode. ISO mode will create a non-bootable drive.
03 BIOS settings — the Zephyrus-specific gotchas

The GU502LU has a few BIOS quirks that will block the install if you don't address them first. Plug in power, then:

Boot into BIOS: power on → spam F2 (or DEL) immediately

  • Secure Boot → Disabled — required. Pop!_OS has its own signing but it's easier to just disable for the install
  • Fast Boot → Disabled — when enabled it can prevent the USB drive from being recognized at boot
  • Launch CSM → Disabled — keep UEFI mode on
  • Boot → Add boot option or set USB as first boot priority
Optimus / MUX Switch If your BIOS has an iGPU / dGPU mode setting, leave it on Optimus (hybrid). Pop!_OS handles this correctly — don't force dGPU only or you may get a blank screen on boot.

Plug in your USB drive, then save and exit BIOS (F10). On the reboot you may need to hit F8 or ESC for the boot menu and select the USB.

Phase Two

The Install

04 Boot from USB and start the installer

When the Pop!_OS live environment loads you'll land on a desktop. The installer starts automatically or there's an icon to launch it. If the screen looks garbled or black, try adding nomodeset to the boot parameters:

  • At the GRUB boot menu (may be a brief screen), press E to edit
  • Find the line starting with linux, go to the end, add a space then type nomodeset
  • Press F10 to boot — this is a temporary measure just to get the installer running
Tip Most GU502LU users report the NVIDIA ISO boots cleanly without needing nomodeset. This is just a fallback.
05 Installer — language, keyboard, and clean install

The installer is clean and minimal. Walk through the screens:

  • Language → English
  • Keyboard layout → English (US)
  • Install type → Clean Install — this wipes the full 512GB NVMe and installs Pop!_OS. Since there's no data to save, this is exactly what you want.
  • Drive → select the NVMe (should show as ~512GB, likely nvme0n1)
Encryption prompt Pop!_OS will ask if you want full disk encryption. Totally optional — it adds a password prompt on every boot. For a secondary machine used for learning, skipping it is fine. If you want it, use a memorable passphrase.

Click Erase and Install → confirm. The install takes about 10–15 minutes. The machine will reboot — remove the USB when it asks.

06 First boot setup — user account and GNOME

After reboot you'll go through a short first-run wizard:

  • Create your user account and password — remember this, it's used for sudo
  • Full Name, username (keep it short and lowercase, no spaces — e.g. adam)
  • Skip online accounts for now

You'll land on the Pop!_OS GNOME desktop. It's a tiling-capable desktop with the Super key as the main shortcut. Take a minute to just click around — it feels close to macOS in some ways.

First key shortcut to know Press Super (Windows key) to open the launcher. This is your primary way to open apps, run commands, and switch workspaces.
Phase Three

Post-Install Essentials

07 Run full system update

Open a terminal (Super → type "terminal" → enter, or right-click desktop) and run the full update stack. Do this before anything else.

# Update package lists and upgrade everything
sudo apt update && sudo apt upgrade -y

# Optional: pop-upgrade's `release upgrade` moves you to the *next* OS
# release (e.g. off 22.04) — run it only if that's what you want.
# Routine updates only need the apt commands above.
sudo pop-upgrade release upgrade

This may take a few minutes. Reboot when it finishes and log back in — if the desktop loads cleanly you're good to continue.
08 Verify NVIDIA driver and GPU recognition

This is the big one. Run these two commands to confirm your GTX 1660 Ti is recognized and the driver is loaded correctly:

# Check driver version — should show 525+ or similar
nvidia-smi

# Confirm GPU details
lspci | grep -i nvidia

nvidia-smi should output a table showing your GTX 1660 Ti, the driver version, and CUDA version. If you see this, you're in great shape.

If nvidia-smi gives "command not found" Open the Pop!_OS Settings app → go to OS Upgrade & Recovery or run the NVIDIA driver installer from the terminal. The NVIDIA ISO should have handled this, but occasionally a manual nudge is needed.
# If needed — manual driver install
sudo apt install system76-driver-nvidia
09 Install essential tools and quality of life packages

A one-liner to grab the tools you'll use constantly:

sudo apt install -y \
  curl wget git htop neofetch \
  build-essential python3-pip \
  docker.io docker-compose \
  flatpak gnome-software-plugin-flatpak
# Add Flathub for app installs (like Obsidian)
flatpak remote-add --if-not-exists \
  flathub https://dl.flathub.org/repo/flathub.flatpakrepo

# Add yourself to the docker group (no sudo needed for docker)
sudo usermod -aG docker $USER

# Apply group change (or just reboot)
newgrp docker
Run neofetch to feel good about life Just type neofetch in the terminal. You'll get a satisfying system info display with the Pop!_OS logo. Required ritual.
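Before moving on, it's worth confirming the docker group change actually took effect — a quick check (hello-world is Docker's official tiny test image):

```shell
# Confirm your user is in the docker group (after newgrp docker or a reboot)
id -nG | grep -qw docker && echo "docker group OK"

# Smoke-test the daemon without sudo — pulls and runs the test image
docker run --rm hello-world
```

If the second command complains about permissions, log out and back in (or reboot) so the group change applies to your whole session.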
Phase Four

Ollama — Your Local LLM Engine

10 Install Ollama

Ollama is a single command install. It sets itself up as a background service and handles CUDA GPU detection automatically.

# Install script handles everything — driver detection, service setup
curl -fsSL https://ollama.com/install.sh | sh

After install, verify the service is running:

sudo systemctl status ollama

# Should show: Active: active (running)
Ollama runs as a systemd service — it starts automatically on boot and listens on localhost:11434
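That localhost:11434 endpoint is a plain HTTP API, which is handy for scripting before Open WebUI is up. A sketch — assumes the service is running, and the generate call needs a model you'll pull in the next step:

```shell
# List installed models via Ollama's HTTP API
curl -s http://localhost:11434/api/tags

# One-shot, non-streaming generation (model must already be pulled)
curl -s http://localhost:11434/api/generate -d '{
  "model": "llama3.2:3b",
  "prompt": "Say hello in five words.",
  "stream": false
}'
```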
11 Pull your first model and verify GPU offloading

Start with llama3.2:3b — it's small, snappy, and great for getting your first local AI conversation running. Then pull the 7B workhorse.

# Start small — 3B model, fast on your GPU
ollama pull llama3.2:3b

# The real workhorse — 7B quantized, fits in 6GB VRAM
ollama pull mistral:7b-instruct-q4_K_M

# Coding-focused model (great for learning Linux commands)
ollama pull qwen2.5-coder:7b

Chat directly in the terminal to verify it works:

ollama run llama3.2:3b

# Type a message, hit enter. Ctrl+D or /bye to exit.

To confirm your GPU is actually being used (not CPU-only), open a second terminal while the model is running:

# Watch GPU memory usage in real time
nvidia-smi -l 1

# You should see GPU memory usage jump when running a model
# llama3.2:3b ≈ 2GB VRAM | mistral:7b-q4 ≈ 4.5GB VRAM
Phase Five

Open WebUI — ChatGPT Vibes, Fully Local

12 Deploy Open WebUI with Docker

Open WebUI gives you a polished chat interface that connects to your local Ollama instance. It runs as a Docker container and auto-starts with the system.

docker run -d \
  --network=host \
  -v open-webui:/app/backend/data \
  -e OLLAMA_BASE_URL=http://127.0.0.1:11434 \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main

Wait about 60 seconds for the container to initialize on first run, then open your browser:

Open WebUI URL Navigate to http://localhost:8080 — you'll be prompted to create an admin account. This is local-only, use any credentials you like.

Once logged in, you can select any model you've pulled with Ollama from the dropdown and start chatting. Your conversation history is saved locally to the Docker volume.

If the model dropdown shows your Ollama models, everything is wired up correctly. You now have a fully local, private AI chat interface.
13 Manage the Docker container going forward

The --restart always flag means Open WebUI starts automatically on every boot. Here are the commands you'll use to manage it:

# Check container status
docker ps

# Stop Open WebUI
docker stop open-webui

# Start it back up
docker start open-webui

# Update to latest version
docker pull ghcr.io/open-webui/open-webui:main
docker stop open-webui && docker rm open-webui
# Then re-run the docker run command from step 12
Phase Six — Bonus Round

Reconnect Your Stack

14 Access Open WebUI from other devices on your network

You can chat with your local models from any device on the same Wi-Fi network — your Mac, phone, tablet, whatever. The container from step 12 used host networking; here you'll recreate it with the port published explicitly. One catch: with a port mapping, 127.0.0.1 inside the container is the container itself, not the laptop, so Open WebUI needs Docker's host-gateway address to reach Ollama:

# Stop and remove the existing container
docker stop open-webui && docker rm open-webui

# Re-run with the port published on all interfaces and a route back to the host
docker run -d \
  -p 0.0.0.0:8080:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main

Then, if the ufw firewall is enabled (it's inactive by default on Pop!_OS — check with sudo ufw status), open port 8080:

sudo ufw allow 8080

Ollama also needs to listen on all interfaces — by default it only accepts connections from localhost, which means the Docker container can't reach it. Fix this by editing the Ollama service config:

sudo systemctl edit ollama

This opens an editor. Add these two lines, then save and exit:

[Service]
Environment="OLLAMA_HOST=0.0.0.0"

Then reload and restart:

sudo systemctl daemon-reload
sudo systemctl restart ollama
docker restart open-webui
Without this step, models won't appear If you skip the OLLAMA_HOST fix, Open WebUI will connect but show no models in the dropdown. This is the fix.

Now find the laptop's local IP address:

hostname -I

# Grab the first IP — something like 192.168.1.XXX
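Grabbing that first address can be scripted, so the laptop prints the exact URL to type on the other device:

```shell
# The first address from hostname -I is usually the LAN IP;
# later entries can be Docker bridge or virtual interfaces
LAPTOP_IP=$(hostname -I | awk '{print $1}')
echo "Open WebUI: http://${LAPTOP_IP}:8080"
```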

On any other device on the same network, open a browser and go to:

http://192.168.1.XXX:8080
Bookmark it The IP may change if the laptop reconnects to Wi-Fi. If you want a stable address, assign a static IP or a DHCP reservation in your router settings.
If you see the Open WebUI login screen from your other device, you're in. Same credentials, same models, same conversation history — just running on the laptop's GPU.
15 Install Obsidian via Flatpak

Obsidian is available on Linux via Flatpak. Your vault lives wherever you sync it via Syncthing, so the moment Obsidian opens it you're back in your second brain.

flatpak install flathub md.obsidian.Obsidian

# Run it
flatpak run md.obsidian.Obsidian

Point it at your Syncthing vault folder and all your notes, templates, and plugins will be right where you left them. Install the Smart Connections plugin and point it at your local Ollama endpoint (http://localhost:11434) for AI-powered vault search.

16 Install Claude Desktop on Linux

Claude Desktop has no official Linux build — Anthropic's claude.ai/download page ships macOS and Windows installers. Community-maintained repackagings of the app as a .deb for Debian/Ubuntu-based systems do exist (Pop!_OS is Ubuntu-based); if you use one, the install looks like this:

# Install the .deb package (adjust filename to match download)
sudo dpkg -i claude-desktop_*.deb

# If dependency errors, fix them with:
sudo apt --fix-broken install

Your MCP config carries over — the claude_desktop_config.json on Linux lives at:

~/.config/Claude/claude_desktop_config.json

# Open it to add your MCP servers
nano ~/.config/Claude/claude_desktop_config.json
MCP servers on Linux Re-add your Obsidian MCP server (mcp-obsidian) pointing at your vault path. If you used uvx for voice-mode, the setup is identical to your Mac — same uvx --python 3.11 voice-mode command.
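For reference, a minimal claude_desktop_config.json sketch with an Obsidian MCP server — the command, args, and vault path here are assumptions for illustration; check your MCP server's README for the exact invocation:

```json
{
  "mcpServers": {
    "obsidian": {
      "command": "npx",
      "args": ["-y", "mcp-obsidian", "/home/adam/Documents/Vault"]
    }
  }
}
```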
Quick Reference

Commands You'll Use Every Day

Command · What it does
sudo apt update && sudo apt upgrade -y · Update the whole system
ollama list · Show downloaded models
ollama run mistral:7b-instruct-q4_K_M · Chat with Mistral in terminal
ollama pull <model> · Download a new model
nvidia-smi · Check GPU status and VRAM usage
docker ps · List running containers
docker logs open-webui · Debug Open WebUI if it misbehaves
systemctl status ollama · Check Ollama service health
sudo systemctl restart ollama · Restart Ollama service
df -h · Check disk space
htop · Task manager in the terminal
neofetch · Feel good about your setup
💾 RAM upgrade reminder When prices drop, you're adding a single 32GB DDR4 3200MHz SO-DIMM (PC4-25600, non-ECC, unbuffered) in the open slot. Kingston FURY Impact KF432S20IB/32 or Crucial CT32G4SFD832A are your targets. Set a CamelCamelCamel alert on B07ZLC7VNH. That takes you to 40GB total and opens up comfortable 13B model territory.