Jellyfin server with GPU transcoding on Proxmox

The goal is to set up a Jellyfin server on my Proxmox machine. It will serve media content only to client devices on my home network; there is no plan to expose the server to the internet. The Jellyfin server needs to support GPU-accelerated transcoding for maximum compatibility with the client devices.

Hardware

The hardware configuration of my Proxmox server:

  • CPU: 13th Gen Intel Core i7-13700T
  • Motherboard: Gigabyte Q670M D3H
  • RAM: G.SKILL Ripjaws S5 Series (Intel XMP 3.0) DDR5 RAM 64GB (2x32GB) 5600MT/s
  • GPU: NVIDIA T1000 8GB low profile
  • SSD 1 (for Proxmox system): Samsung SSD 980 PRO 500GB
  • SSD 2 (for LXC/VM drives): Samsung SSD 980 PRO with Heatsink 2TB
  • SSD 3 (for Jellyfin transcoding cache): Samsung SSD 840 EVO 500GB SATA III
  • Network Adapter: 10Gtek 10Gb PCI-E NIC Network Card, Dual SFP+ Port, with Intel 82599ES Controller
    The motherboard comes with one 1GbE and one 2.5GbE RJ45 port. I'm using the 1GbE port as a dedicated Proxmox management port and the 2.5GbE port for some internet-facing workloads. The dual SFP+ 10GbE ports are for internal service workloads.
  • Chassis: RackChoice 2U Micro ATX Compact Rackmount 2 x 5.25 chassis
  • Power supply: EVGA 700 BR, 80+ Bronze 700W

Proxmox VE

At the time of writing, the server is running Proxmox VE 8.2.4, kernel version 6.8.8-4-pve.

Secure boot is enabled.

Guest OS

The Jellyfin server runs in an Ubuntu 22.04.4 LTS (kernel 6.5.0-45) VM. I assigned 4 CPUs, 8GB RAM, a 60GB boot disk (on the NVMe SSD) and a 256GB transcoding cache disk (on the SATA III SSD) to the VM.

The reason to use a VM instead of an LXC container is that the media files are stored on a different device (a NAS server), so I need to mount an NFS share for Jellyfin to access them. With an LXC container, I might have to mess with the host OS (PVE) to mount that NFS share, which I want to avoid. Running a VM provides better isolation from the host OS.
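As a sketch, the NFS share can be mounted inside the VM with an /etc/fstab entry like the one below; the NAS address, export path, and mount point are placeholders for illustration, not my actual setup:

```shell
# Install the NFS client tools inside the Ubuntu VM
sudo apt install nfs-common

# Example /etc/fstab entry -- replace 192.168.1.50:/volume1/media with
# your NAS export, and /mnt/media with your preferred mount point:
#   192.168.1.50:/volume1/media  /mnt/media  nfs  defaults,ro,_netdev  0  0

sudo mkdir -p /mnt/media
sudo mount /mnt/media   # or `sudo mount -a` to mount everything in fstab
```

Mounting read-only (`ro`) is enough for Jellyfin playback; `_netdev` tells systemd to wait for the network before attempting the mount at boot.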

I don't plan to have the Jellyfin server provide DLNA service (I already have a DLNA server on the NAS), so only the following incoming rules are enabled on the firewall:

  • HTTPS (TCP 443)
  • SSH (TCP 22)
  • Ping (ICMP echo request)
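On Ubuntu, the rules above can be expressed with ufw, for example:

```shell
# Deny all inbound traffic except the services listed above
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp    # SSH
sudo ufw allow 443/tcp   # HTTPS (Jellyfin)
sudo ufw enable

# ufw's default rules already accept ICMP echo requests, so ping keeps working
sudo ufw status verbose
```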

Jellyfin Server

The Jellyfin server was installed using the repository method. After the installation, I switched the service's port from 8096 to 443 (the HTTPS default port). By default, the OS won't allow Jellyfin to bind to "privileged ports" (below 1024) because it is not running as root. To change this, run the following command as root:

setcap 'cap_net_bind_service=+ep' /usr/lib/jellyfin/bin/jellyfin
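You can confirm the capability took effect with getcap. One caveat: package upgrades replace the Jellyfin binary, which clears file capabilities, so the setcap command may need to be re-run after each Jellyfin update:

```shell
# Verify the capability is set (exact output format varies by libcap version)
getcap /usr/lib/jellyfin/bin/jellyfin

# After re-applying setcap following an upgrade, restart the service
sudo systemctl restart jellyfin
```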

I use Let's Encrypt to get an HTTPS certificate, then use the following command to generate the pfx format required by Jellyfin:

openssl pkcs12 -export -in fullchain.pem -inkey privkey.pem -out jellyfin.pfx

I then copied jellyfin.pfx to the /var/lib/jellyfin/https folder and gave ownership of the file to the jellyfin account:

chown jellyfin:jellyfin /var/lib/jellyfin/https/jellyfin.pfx
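Since Let's Encrypt certificates expire every 90 days, the conversion and ownership steps can be automated with a certbot deploy hook. A sketch, assuming the standard certbot directory layout, an empty pfx export password, and a placeholder domain (the script path and name are my own choice, not a certbot requirement):

```shell
#!/bin/sh
# /etc/letsencrypt/renewal-hooks/deploy/jellyfin.sh (hypothetical name)
# Runs after each successful renewal: rebuild the pfx, fix ownership,
# and restart Jellyfin so it picks up the new certificate.
set -e
LIVE=/etc/letsencrypt/live/jellyfin.example.com   # placeholder domain

openssl pkcs12 -export -passout pass: \
    -in "$LIVE/fullchain.pem" -inkey "$LIVE/privkey.pem" \
    -out /var/lib/jellyfin/https/jellyfin.pfx
chown jellyfin:jellyfin /var/lib/jellyfin/https/jellyfin.pfx
systemctl restart jellyfin
```

Remember to make the hook executable (`chmod +x`); certbot runs deploy hooks only on renewals that actually produce a new certificate.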

Then, in the Jellyfin web UI, go to Advanced -> Networking -> HTTPS Settings to set the path to the pfx certificate file.

Hardware Transcoding

Originally I was using the UHD 770 iGPU for hardware transcoding. This involved enabling SR-IOV for the UHD 770 so it could be shared by multiple virtual machines running on the PVE host at the same time. The instructions can be found in Derek Seaman's blog post: Proxmox VE 8.2: Windows 11 vGPU (VT-d) Passthrough with Intel Alder Lake. It worked well until early 2024, when the author of the SR-IOV-enabled i915 driver stopped maintaining it and a newer Linux kernel broke compatibility.

I did some research and decided to move to an NVIDIA GPU. It costs more, but the plus side is that I can now also use the CUDA cores to run AI workloads for learning purposes.

Speaking of cost, I definitely didn't want to be set back thousands of dollars for an enterprise GPU just for transcoding and learning AI. So after a bit more research, I settled on the NVIDIA T1000 8GB. Its low-profile form factor fits perfectly in my 2U chassis. It has pretty comprehensive video codec support (H.264, HEVC, VP9, etc., but not AV1). It is not power hungry (50W max). And I found a reasonably priced used one on eBay.

Once the T1000 was plugged in, I followed this NVIDIA vGPU Guide to enable vGPUs on PVE host and the Jellyfin VM.

The last step is to change the Jellyfin settings to use the Nvidia NVENC option for hardware-accelerated transcoding.

To verify that everything works correctly, I played an HEVC Main 10 encoded 4K video in the Jellyfin web UI:

  • In the Jellyfin "Playback Info" window, I see a transcoding framerate of 150fps.
  • In the Proxmox web UI, the Jellyfin VM's CPU utilization stays around 15% while the video is playing. If Jellyfin were transcoding the 4K video in software, the CPU utilization would be much higher.
  • Running nvidia-smi in the Jellyfin VM while the video is playing shows ffmpeg using the GPU:
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.54.15              Driver Version: 550.54.15      CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  GRID RTX6000-1Q                On  |   00000000:01:00.0 Off |                  N/A |
| N/A   N/A    P0             N/A /  N/A  |     710MiB /   1024MiB |     27%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A      3342      C   /usr/lib/jellyfin-ffmpeg/ffmpeg               710MiB |
+-----------------------------------------------------------------------------------------+
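Independently of playback, NVENC can also be smoke-tested from the shell with Jellyfin's bundled ffmpeg. A quick sketch that encodes a synthetic test pattern and discards the output (no file is written):

```shell
# Encode 5 seconds of a generated 1080p test pattern with NVENC.
# A working setup finishes with encoding statistics; a driver or
# vGPU problem fails immediately with an explicit NVENC error.
/usr/lib/jellyfin-ffmpeg/ffmpeg -f lavfi \
    -i testsrc2=duration=5:size=1920x1080:rate=30 \
    -c:v h264_nvenc -f null -
```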

And that's it, I'm happy with my Jellyfin setup.
