The Summary

I've had a handful of friends ask me what all goes into my media setup, Mattyflix, so I've decided to take some time and air it all out. This is not meant to be a step-by-step guide on how to replicate it, but it will give you a look at all of the inner workings so you know where to start looking.

The Hardware

At the top of the rack is the primary storage component of the whole setup: two Synology DS420+ NAS devices. Each has been upgraded to the maximum 6GB of RAM and outfitted with 2 x 512GB M.2 SSDs for a read/write cache. The left NAS is loaded with 4 x 14TB Western Digital drives in an SHR (Synology Hybrid RAID) array, giving me one-drive fault tolerance and a total usable volume of 42TB. The right NAS is identical, aside from being loaded with 4 x 12TB WD HDDs instead, for a total usable volume of 36TB. Each NAS has 2 x 1GbE LAN ports, which lets me bond each pair into a link aggregation group (LAG) for a total throughput of 2000 Mbps per NAS.

Next up in line is the Unifi Dream Machine Pro (UDM Pro), the all-in-one networking console that handles the entire networking component. There's not much to say other than that it handles all of my VLANs, firewall rules, port forwarding, wireless APs, etc. I'm currently using a 2.5GbE SFP module as the uplink to my modem, which also has a 2.5GbE port, guaranteeing that I can use the full capacity of my 1.2Gbps download speed from Comcast.

Just below the UDM Pro is a 24-port PoE Unifi switch. This is what powers all of the APs in the house, along with a few PoE-enabled switches. Nothing of note here, but since it exists within the Unifi environment, it allows for some nice, smart network diagrams.

Jumping down the rack, I have three rack-mounted servers: a 1U Dell that hasn't been turned on since I got it, an HP ProLiant DL380 G6, and an HP ProLiant DL380p G8.

The HP ProLiant DL380 G6 is the box running everything for Mattyflix. It has 6 x 300GB 15k SAS drives configured in RAID 10 (three mirrored pairs, striped) for roughly a 6x read speed gain and a 3x write speed gain. Mirroring halves the capacity, so I only get a usable volume of 900GB, but I don't exactly need a lot of storage here. It's been upgraded with 2 x Intel Xeon X5650 processors, each with 6 cores and 12 threads, a base frequency of 2.66GHz, and a max turbo of 3.06GHz. That gives the box a total of 24 logical processors and a CPU capacity of 32GHz. It also has 48GB of RAM installed. Lastly, I've fitted the server with an NVIDIA Quadro M2000 GPU and set up hardware passthrough to the Plex VM to allow for hardware transcoding of the media. (More on this later.)

The Server

Okay, there's a lot going on here so we're going to break this down first vertically, then dig into the stack.

At the top of everything is VMware ESXi 6.5 running on the HP ProLiant DL380 G6. This is the hypervisor that allows me to spin up and manage virtual machines on the server.

Sitting within ESXi are two virtual machines:

  • Mattyflix

    • OS: Ubuntu Server 18.04.6 LTS

    • CPUs: 6 vCPUs

    • RAM: 16GB

    • GPU: NVIDIA Quadro M2000

  • PhotonOS

    • OS: VMware Photon OS

      • Photon OS is a lightweight Linux distribution built by VMware specifically for running Docker containers in a virtualized environment.

    • CPUs: 4 vCPUs

    • RAM: 8GB

Mattyflix is a VM whose sole purpose is to host Plex Media Server, currently running as a service. The reason we have GPU passthrough enabled on this machine is media transcoding. Transcoding is what allows Plex to serve up a 4K movie through my small upload bandwidth. A full 4K stream wouldn't hold up over that connection, so being able to convert the media on the fly to a lower resolution is key to supporting a large streaming base. That's where the GPU comes in. On top of that, each of the two NAS devices shares a media folder, and I mount those shared folders on the VM using CIFS entries in /etc/fstab.
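
If you're curious what those mounts look like, here's a minimal sketch of the fstab entries. The hostnames, mount points, user, and credentials file are placeholders, not my actual config:

    # Illustrative CIFS mounts for the two NAS media shares
    # (hostnames, mount points, user, and credentials file are placeholders)
    //nas-01/media  /mnt/media1  cifs  credentials=/root/.smbcreds,uid=plex,gid=plex,vers=3.0  0  0
    //nas-02/media  /mnt/media2  cifs  credentials=/root/.smbcreds,uid=plex,gid=plex,vers=3.0  0  0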

PhotonOS is the other VM, which currently runs every other application in Docker containers. This is a breakdown of the containers I'm running to make Mattyflix a reality:

  • Portainer (2.11.0)

    • This basically hooks directly into the Docker daemon and gives me a nice UI to view and manage all of my Docker containers, as well as easily spin up new ones.

  • Watchtower

    • Watchtower's sole purpose is to gracefully stop and rebuild any Docker container that is running an out-of-date image. This keeps all of my applications up-to-date.

  • Traefik (2.0.7)

    • This is my reverse proxy. Any time I want to expose another on-prem application to the public, all I need to do is add a few Traefik labels to the Docker container, and Traefik will route the specified subdomain to the internal port I defined. (There's a sketch of what this looks like just after this list.)

  • Tautulli (2.8.0)

    • This is an advanced stats and activity monitor that runs alongside Plex. It allows me to collect in-depth data from the Plex server so I can keep an eye on performance and bandwidth, along with enabling some cool features such as automating a weekly newsletter to users and publishing Plex server statuses directly to a Twitter account (@WatchMattyflix).

  • SABnzbd (3.4.2)

    • This is a binary newsreader. In other words, this is the thing that does all of the downloading. Instead of torrenting from a P2P network, Usenet allows for direct downloads of media in binary format, which reduces the risk of your activity being flagged by your ISP, and it doesn't require you to upload or seed the data. There are two components to this that I'll go into later.

  • Radarr (3.2.2.5080)

    • Radarr is an application that manages my movie library. It mainly keeps the movie files organized for Plex, but it is also capable of searching Usenet indexers for media, then sending the download stub (an .nzb file) to SABnzbd for download and processing.

  • Sonarr (3.0.6.1342)

    • Radarr but for television series.

  • Overseerr (1.27.0)

    • This is an externally facing application that lets my Plex users authenticate via Plex SSO and make both movie and series requests. Those requests are funneled directly to Radarr/Sonarr and then processed by SABnzbd.

  • Organizr (2.1.1140)

    • I'm not actually using this at the moment, but it's supposed to be a hub for all of the above applications, so any user can log in and navigate between them without having to remember specific URLs or authenticate into each one. I've been having trouble getting it to play nicely with Traefik.
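
As promised, here's roughly what the Traefik bit looks like in practice: a docker-compose sketch exposing Tautulli through Traefik v2 labels, with Watchtower alongside it. The domain is a placeholder, not my real one:

    # docker-compose sketch: exposing a container through Traefik v2 labels
    # (the domain is a placeholder)
    services:
      tautulli:
        image: tautulli/tautulli
        restart: unless-stopped
        labels:
          - "traefik.enable=true"
          - "traefik.http.routers.tautulli.rule=Host(`tautulli.example.com`)"
          - "traefik.http.services.tautulli.loadbalancer.server.port=8181"
      watchtower:
        image: containrrr/watchtower
        restart: unless-stopped
        volumes:
          # Watchtower talks to the Docker daemon to spot stale images
          - /var/run/docker.sock:/var/run/docker.sock

Traefik picks the labels up at runtime, so exposing a new app really is just a redeploy with three extra lines.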

(Screenshot of the web UIs, from top to bottom: Plex, Portainer, Overseerr, Tautulli, Radarr, and Sonarr.)

The Process

Q: So how does it all work?

A: Black magic, powered by demons.

...

Joking, but only sort of. Earlier I mentioned how I use Usenet for procuring media. That part consists of two major components:

  1. Access to Usenet

  2. Access to an indexer

Access to Usenet is not free; you need to pay for a Usenet service provider. The cost varies based on what your provider gives you. The main variables are: retention time (how far back in the post history your server can read), connections (the number of concurrent connections you can make to Usenet, i.e. your download speed), bandwidth (how much you're allowed to "read"), and how many newsgroups you have access to.

Think of Usenet as a set of rudimentary forums, where every forum post/thread is just raw binary data for a chunk of a file. A single file is typically split across many posts, and SABnzbd is just "reading" those posts, then saving, reassembling, and unpacking them. It's not simply downloading a file called "Toy Story (1995) [1080p].mkv".
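
This is where .nzb files come in: an NZB is just an XML manifest that tells the reader which posts make up a file. A stripped-down skeleton looks something like this (the group and message IDs are made up):

    <nzb xmlns="http://www.newzbin.com/DTD/2003/nzb">
      <file poster="someone@example.com" subject="Example file [1/2]">
        <groups>
          <group>alt.binaries.example</group>
        </groups>
        <segments>
          <!-- each segment is one Usenet post holding a chunk of the file -->
          <segment bytes="782340" number="1">part1@example.news</segment>
          <segment bytes="782340" number="2">part2@example.news</segment>
        </segments>
      </file>
    </nzb>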

You also need an indexer, which is a service that basically maps out the different newsgroups and posts and tells you which posts correlate with the media you want to access. Think of an indexer as, say, The Pirate Bay: it doesn't actually host any content, it just gives you the .torrent files (here, .nzb files) that tell you how to GET the content. Indexers are sometimes free but limit the number of API calls you're allowed to make in a day. Paying for access can raise that limit, or simply remove it altogether. The cost is usually very small, around $10/yr.

So, putting that all together: SAB is the Usenet "reader", and Sonarr/Radarr hook into the indexers and make requests for you.

So when a user makes a request for media in Overseerr, this is how it goes (there's a rough API-level sketch of step 1 after the list):

  1. The media is added to Sonarr/Radarr

  2. Sonarr/Radarr does a search of any/all indexers I have defined for that media

  3. If it finds a suitable result, it sends the stub to SABnzbd for acquirement

  4. SABnzbd procures the media; all the while, Sonarr/Radarr is watching the progress through the API

  5. Once the download is complete, the media is then picked up by Sonarr/Radarr and renamed and moved into the proper directory

  6. Once the media has been renamed and moved, Sonarr/Radarr calls Plex via the API to refresh any folders that have changed, so Plex picks up the new media
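
For the curious, step 1 boils down to a single API call from Overseerr. Here's a rough sketch of what adding a movie through Radarr's v3 API looks like; the host, API key, quality profile, and root folder are placeholders:

    # Roughly what happens when a movie request is forwarded to Radarr
    # (host, API key, quality profile, and root folder are placeholders)
    curl -X POST "http://radarr:7878/api/v3/movie" \
      -H "X-Api-Key: <redacted>" \
      -H "Content-Type: application/json" \
      -d '{
            "title": "Toy Story",
            "tmdbId": 862,
            "qualityProfileId": 1,
            "rootFolderPath": "/movies",
            "monitored": true,
            "addOptions": { "searchForMovie": true }
          }'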

This whole process generally takes a few minutes, depending on the size of the media.

The Extras

So all of that is nice, but if I want to add new movies to Radarr/Plex, I need to be constantly requesting them, right?

That's where the next level comes in. Radarr also supports RSS feeds and lists. The community has created a series of automations and lists that I've subscribed to. You know how IMDb has a category for "Highly Anticipated Movies"? I just added that list's RSS feed to Radarr, and any movie that pops up on it gets added and grabbed as soon as it's available. I don't even have to think about it.

(Screenshot: Radarr already waiting on Avatar 2, 3, 4, and 5.)

The Conclusion

So I'd been doing this on a small scale for years, using a Raspberry Pi as my server and a 5TB external hard drive over USB 2.0. It wasn't until the pandemic hit and Comcast removed their data caps that I decided to take the time to do this "the right way". Overall, the money I spend on Usenet access, electricity, and upkeep is probably more than just paying for one or two streaming services. But there aren't just one or two anymore. I was perfectly content paying for the convenience of a legitimate setup, but the costs have outweighed the convenience factor. I did all of this not only to avoid paying for a dozen different streaming services each month, but to keep my knowledge fresh on server administration, networking, proxies, Docker, Linux administration, and more. And I figured, with all of the hard work I'd put in, I may as well let my friends and family benefit from it.