Getting WSL working, running AI

  • 2 min read
  • Tags: 
  • wsl
  • setup

Had some fun with WSL, but in the end got it working pretty well.

Installation

Easy enough. I did need to blow away what I had first, and remove all vestiges of docker from Windows as I wanted to keep my ai workflow entirely within WSL.

Installed VS Code in Windows, and added a few extensions.

nvme

This was strangely the toughest part.

I’d aimed to have the nvme drive not known by Windows at all. I wanted it to have one ext4 partition so I’d get what passes for native performance.

In the end I had to have the drive formatted as NTFS. I then created a fixed-size vdisk, made that visible to WSL, and used fdisk to create an ext4 partition. The strangely painful point seemed to be the sector size in NTFS: I needed to explicitly set it to 4096 so the physical and logical sizes matched. Until I did that, my mkfs.ext4 always ended with an i/o error.
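Roughly, the dance looks like this. A sketch only - the path, size and device name are illustrative, not my exact ones, and the PowerShell part assumes the Hyper-V module (for New-VHD) is available and is run as administrator:

```shell
# Windows side (PowerShell, admin): create a fixed-size VHDX on the
# NTFS-formatted NVMe drive. Setting logical sector size to 4096 so it
# matches the physical sector size was the key step - without it,
# mkfs.ext4 inside WSL failed with i/o errors.
New-VHD -Path D:\wsl\data.vhdx -SizeBytes 500GB -Fixed `
  -LogicalSectorSizeBytes 4096 -PhysicalSectorSizeBytes 4096

# Attach the disk to WSL without auto-mounting, so Linux can partition it.
wsl --mount D:\wsl\data.vhdx --vhd --bare

# WSL side: partition, format and mount (device letter will vary - check lsblk).
sudo fdisk /dev/sdc        # n, p, defaults, w - one Linux partition
sudo mkfs.ext4 /dev/sdc1
sudo mkdir -p /data
sudo mount /dev/sdc1 /data
```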

nvidia

Worked far too easily, must be some sort of magic. nvidia-smi showed as expected first time. Very nicely surprised.

docker and then ai

Rather than docker.io, I wanted to use docker-ce, so I installed it the normal way.
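For reference, "the normal way" is Docker's own apt repository (this assumes an Ubuntu distro in WSL; adjust for yours):

```shell
# Add Docker's official GPG key and apt repository, then install docker-ce.
sudo apt-get update
sudo apt-get install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
```

For GPU containers you'll also want the NVIDIA Container Toolkit installed inside WSL.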

Started basic ai containers and kicked the tires: got ollama, comfyui and open-webui all working together with models and outputs in /data. Used Comfy with SDXL and then Flux to generate monk pictures - okay, so it takes a bit of time, but I’m getting results. I’m seeing how I need to make sure one tool releases the VRAM before starting another: I can add a node to ComfyUI to do that after each picture, and I can add options to my docker compose files.
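The compose-file side of that looks something like this. A sketch, not my exact file - the /data layout is just my convention, and the assumption is that OLLAMA_KEEP_ALIVE=0 makes ollama unload its model straight after each request so the VRAM is free for Comfy:

```yaml
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    volumes:
      - /data/ollama:/root/.ollama
    environment:
      # Don't keep the model resident between requests - give the VRAM back.
      - OLLAMA_KEEP_ALIVE=0
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"
    volumes:
      - /data/open-webui:/app/backend/data
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    depends_on:
      - ollama
```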

My poor FX-6300 CPU was flat out some of the time, though. Stopping nonsense like OneDrive and iCloud really helped, of course.

I think this platform might actually work; I just have to be careful about working serially and keeping the system lean. This won’t be a racehorse, but it’ll pull a plough - and that’s enough for me!