Server Version#: plexmediaserver-1.26.2.5797-5bd057d2b.x86_64.rpm
Player Version#: N/A
The RPM packaging of plexmediaserver does a number of things in its rpm scripts, including a critical check in the preinstall to verify systemd is present. It is understood that systemd is necessary; that's not the issue.
As seen by running rpm -qp --scripts <pkg>, the preinstall script currently does:
# Make sure this system is systemd-based. If not, stop here and fail with error code 1.
if [ "$(cat /proc/1/comm)" != "systemd" ]; then
    echo "Plex Media Server requires systemd. Please upgrade the Operating Sytem version"
    exit 1
fi
[... snip ...]
What's important to note is that this relies on the target system being the one currently running, so that the mounted /proc file system reflects it. On the ostree-based distros (Fedora CoreOS, Fedora Silverblue, Fedora Kinoite, openSUSE MicroOS, etc.) that are becoming very popular for servers, the system being updated is not running and is isolated from the currently running system. As a result, this check fails even when systemd is present, which is hamstringing users of these distros.
An easy alternative to this check would be to declare an actual dependency on the systemd package. In the extremely unlikely case that someone tries to install the plexmediaserver rpm onto a system that supports rpms but somehow doesn't currently have systemd (I'm not aware of any distribution in the last 3 years that fits this), they would simply see systemd in the list of dependencies about to be installed during the rpm installation. This is actually the method Fedora recommends, and there is also a weak-dependency option whose satisfaction can be verified without forcing systemd as a hard dependency.
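As a sketch, the spec-file change being suggested would be on the order of the following (Recommends: is the weak-dependency form understood by modern rpm/dnf/zypper):

```spec
# Hard dependency: the package manager will pull systemd in automatically
# if it is somehow missing from the target.
Requires: systemd

# Or, the weak-dependency variant mentioned above: installed by default,
# but not enforced, so its presence can still be verified afterward.
#Recommends: systemd
```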
Yet another alternative is to look for the presence of /usr/lib/systemd/. While there is a difference between systemd being installed and systemd running, that distinction is unimportant in this case: it's safe to assume that if systemd is installed, it's being used. If you truly wanted to be certain that something else hadn't artificially created the path, you could run rpm -qf /usr/lib/systemd/ to verify the folder is owned by the systemd package specifically.
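A minimal sketch of what that directory-based check could look like. The has_systemd name and the optional root-prefix argument are mine, added so the check can be pointed at a mounted target image (or a scratch directory) instead of the live system:

```shell
#!/bin/sh
# Hypothetical replacement for the /proc/1/comm test: look for the systemd
# tree on disk. The optional $1 prefix lets the same check run against a
# mounted target image (e.g. /mnt/sysimage) as well as the live root.
has_systemd() {
    [ -d "${1:-}/usr/lib/systemd" ]
}

# Exercise it against a throwaway tree rather than the live system:
root="$(mktemp -d)"
mkdir -p "$root/usr/lib/systemd"
has_systemd "$root" && echo "systemd tree found under $root"
```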
This topic was previously brought up, but the point was missed in the prior question when it was mistaken for a concern about Docker. That topic was automatically closed and isn't taking new responses.
I understand what you’re saying and think I understand what you’re trying to do.
It's not possible to nest use of rpm from within rpm (rpm asserts a database lock).
If you look at how Fedora installs itself from the bootable image:
1. Create the partition table on the new OS device.
2. Format the partitions on the new OS device.
3. Mount the device's partitions at /mnt/sysimage.
4. Copy (extract) the required core OS files into /mnt/sysimage.
5. Bind the current live kernel directories into ‘sysimage’:
– mount --bind /proc /mnt/sysimage/proc
– mount --bind /sys /mnt/sysimage/sys
– mount --bind /dev /mnt/sysimage/dev
6. Execute the necessary commands on the new target in its namespace:
– chroot /mnt/sysimage <command(s)-or-script(s)>
7. Unmount the filesystems.
At this point, the target device is ready to be booted.
As you can see, completing configuration of the target OS installation is dependent on the critical kernel directories being present.
Regarding the packaging
I have a completely rewritten RPM package script set.
That package, like its current DEB counterpart, supports live-system, container (LXC or Docker), and Custom installation models.
I had previously offered it here in the forums.
There was insufficient interest to pursue further.
Without sufficient interest, the effort was placed on indefinite hold.
That’s amazing work. I only recently installed a new server using one of the os-tree based distros so I haven’t had cause to encounter or investigate this issue before. I did find your prior post, but it was closed for discussion and I wasn’t aware it would solve this specific problem as well.
I'll give it a shot on Fedora 36 Silverblue, which should validate it for Fedora 36 Silverblue, Fedora 36 CoreOS, Fedora 36 Workstation, and Fedora 36 Server. Presumably it would also cover openSUSE Leap and openSUSE MicroOS, which use almost identical models, but I'm not familiar enough with the variations to be certain of that. Is there anything else I can do to help that work become mainline?
EDIT: It appears the GDrive rpm file linked in that post is gone now, as was reported by one of those who replied at the time. Is there any hope this could be revived?
I don’t have an active RPM VM right now so can’t check this but this appears to be the most recent update I have for that new packaging.
Would you mind checking it?
If this is correct, you’ll find /tmp/plexinstaller.log will contain the flag Custom=0/1 in addition to the systemd, lxc, and docker flags.
The special handling for OpenSUSE is included in the installer.
I do not know how many variants this will support.
Plex's official support covered CentOS 7, Fedora, Ubuntu, and Debian.
When CentOS Stream went sideways, that largely fell by the wayside.
I rarely hear any feedback from Fedora users. I don't know if that's because they migrated to Ubuntu/Debian, or to Docker for the vastly superior hardware transcoding support.
PS: I think this thread's title might be better reworded. Isn't the issue about non-live situations? The check works as expected when the installation target is running as the active kernel. While non-standard, you could customize PMS to run in a chroot() environment of your own making.
Understandable. I think the biggest hurdle with community involvement is the highly technical nature of the change; it just doesn't ping on anyone's radar. I'll see if I can drum up some traffic to help test.
Just so I’m clear, the expected user-level benefits to this change are:
Allows Native install on os-tree-based distros
Allows for Flatpak creation (theoretically)
Supports rpm-based distros in PMS containers (currently only .deb packages work in containers)
I’m about to test the file and I’ll let you know for at least part of the case.
You specifically mention that there's better transcoding support with methods other than Fedora native, but I haven't seen anything that would actually suggest that. Do you have insight you can pass on about that topic?
From my experience, the Docker method is an incredibly brittle nightmare, since it relies not only on the NVIDIA drivers like a native installation does, but also on a custom NVIDIA container runtime that's independently updated, and on a rapidly changing set of container configuration options needed to make the hardware appear properly inside the container. Everything I'm seeing strongly suggests a native install on dedicated hardware is a far better solution when it's available.
For Ubuntu/Debian comparison I haven’t heard much. If anything, the NVIDIA drivers seem to be better supported on Fedora in the last year and a half than on Ubuntu/Debian in general. Do you see something different?
DEB distros :
Hardware tone mapping support for KabyLake (-7xxx) and above CPUs
Hardware transcoding support for CometLake (-10xxx) and above CPUs
Hardware transcoding for Intel -2xxx → -9xxx CPUs is included with PMS.
RPM distros :
Hardware transcoding for Intel -2xxx → -9xxx CPUs is included with PMS.
(unless something has changed in the past year at the distro level)
Please be careful with the use of "Container".
Linux Containers (LXC):
is the virtualized OS content (full).
relies on the kernel (as do all containers).
maps external directories/resources into the namespace, as all containers do.
typically uses only 200-300 MB of disk for the entire container.
can be updated & added to like any full OS installation.
installing an application like Plex works as a native package installation.
I use LXC over Docker because it’s a (paraphrasing) “ultra-lightweight VM without the overhead of a VM”. I don’t suffer the typical type 1/type 2 virtualization problems because I’m still using the native host kernel – just in a different kernel namespace.
Below are my LXCs. Each thinks it's a peer on my LAN, just as if it were a VM.
I wasn’t aware of those differences, that’s quite interesting. It does seem to be particular to Intel encoding, but still very useful to know.
You’re absolutely right, I forget about LXC often. The unfortunate nature of Snap’s antagonistic relationship with developers and LXD’s insistence on tight coupling with it usually make it a non-starter for most use cases it would be perfect for, but I should look back at regular LXC again.
I’m assuming you’re not actually building Snaps and uploading them to the hardcoded Snapstore Canonical maintains, so presumably you’re building your own LXC containers and running them manually. LXC/LXD don’t have anything equivalent to a Dockerfile, so I’m guessing you just have the usual mishmash of shell scripts. Do you have something you could share on GitHub (or wherever), especially in the realm of how you start said container images?
It's not that the NVIDIA drivers themselves are better or worse on their own; it's about how current they can be. Fedora has taken a much more proactive approach since the announced deprecation of CentOS (presumably RedHat resources were freed up) to getting, and staying, up to date with the most recent NVIDIA drivers. Ubuntu/Debian take a more LTS-like approach, so NVIDIA updates are often held back to avoid incompatibilities with desktop software that isn't being significantly updated. This means the faster-updating distro can adopt new NVIDIA driver releases sooner, presumably picking up efficiency gains as well as newer hardware support.
Pick your poison — a stable system which is a few package versions behind — or — a bleeding edge system which may not boot after the most recent updates are installed (you can’t tell me that’s never happened LOL)
LXC setup is like setting up any host except partitions are ‘host directory maps’.
You get the command line base OS by default.
LXD-dashboard does the bulk of the work through the GUI.
You can run lxc via the command line or use the GUI.
Once you start a container, it restarts with the host unless otherwise configured.
I tried the rpm, but it's generating an error return code (1) from the post-install scriptlet without any accompanying error message. Looking at the scriptlet, the most likely cause would be the initial check failing because /tmp/plexinstaller.log is missing. Technically it could be an unexpected, unhandled script error that caused termination because of the set -e, but that seems unlikely.
Examining the preinstall scriptlet, it still has the systemd check that doesn't work for systems that install while not running, but since there are no errors from the preinstall, it must be detecting the ostree distribution as a docker installation. Furthermore, immediately after the docker early exit is a check that requires Fedora to be exactly version 26, which has been EOL since Jan 2019 and would be guaranteed to cause an error exit.
No output is ever generated from the preinstall, but that makes sense: it appears the set -e is causing the calls to Output to fail out of the function immediately at the check:
if [ "$2" = "1" ]; then
Message="$Message (set in Preferences.xml)"
fi
when $2 isn’t defined.
On the assumption that the tmp file is not present during the postinstall: I believe I saw something about the rpm installer no longer preserving /tmp between scriptlets, and a macro being available for a scratch directory that replaces it.
So the list of issues:
systemd is not declared as a dependency, and the check for it still fails on ostree distros that install while not running
the docker container check is catching non-docker distros that weren't detected as systemd
Output isn't working properly because of set -e
/tmp/plexinstaller.log isn't preserved into the postinstall
If you’ve pulled the scripts out of the RPM, be advised – They MUST run as root.
%pre runs and writes /tmp/plexinstaller.log. If %pre is successful, nothing is printed: all output goes to /tmp/plexinstaller.log, and the console just gets a "beginning" and a "completed" line.
If there are errors which block continuing, %pre prints a summary.
%post reads the contents of /tmp/plexinstaller.log under script control and makes actual changes.
(%pre = take inventory and validate the inventory ; %post = do the work)
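For illustration, a small sketch of that %pre-to-%post handoff. The key=value log format and the get_flag helper are my assumptions; only the flag names come from this thread:

```shell
#!/bin/sh
# %pre side (sketch): record what was detected. mktemp is used here instead
# of the real /tmp/plexinstaller.log path so this can run anywhere.
LOG="$(mktemp)"
printf 'systemd=1\nlxc=0\ndocker=0\nCustom=0\n' > "$LOG"

# %post side (sketch): read a flag back under script control.
get_flag() {
    sed -n "s/^$1=//p" "$LOG"
}

echo "Custom flag: $(get_flag Custom)"   # -> Custom flag: 0
```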
If “$2” isn’t defined, you get NULL.
[chuck@lizum ~.2000]$ test() {
> echo Arg1=\"$1\"
> [ "$2" != "" ] && echo Arg2=\"$2\"
> }
[chuck@lizum ~.2001]$ test a b
Arg1="a"
Arg2="b"
[chuck@lizum ~.2002]$ test a
Arg1="a"
[chuck@lizum ~.2003]$
When I run the install, I’m getting absolutely no console output at all from either %pre or %post, but I do get an exit code 1 during %post which causes it to halt the install.
I'm not trying to run either of the scripts manually; I simply printed them and visually inspected them. The list of issues is based on guesswork from scripting experience and the little bit of info (or lack thereof) produced by the attempted install of the rpm.
If I'm not mistaken, the set -e command causes an immediate exit of a script when a non-zero exit code occurs outside of an if or while condition (and its companion set -u does the same when an undefined variable is accessed). I believe it's also inherited by subshells once set.
Assuming that's accurate, the set -e in the early part of the %pre and %post will get inherited by the subshells invoked when Output is called, and the test against the undefined $2 can cause an immediate exit. The way to avoid that is to verify $2 exists before using it, usually with an if [ $# -ge 2 ].
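A sketch of the guard being suggested, assuming the Output function looks roughly like the quoted fragment (the function body here is mine, not the actual scriptlet code):

```shell
#!/bin/sh
set -eu  # -e: exit on uncaught non-zero status; -u: treat unset variables as errors

# Hypothetical Output-style helper that checks the argument count before
# touching "$2", so strict mode cannot abort the scriptlet when the caller
# passes only one argument.
Output() {
    Message="$1"
    if [ $# -ge 2 ] && [ "$2" = "1" ]; then
        Message="$Message (set in Preferences.xml)"
    fi
    echo "$Message"
}

Output "plain message"        # -> plain message
Output "flagged message" 1    # -> flagged message (set in Preferences.xml)
```

Note that $# inside a POSIX shell function refers to the function's own arguments, which is what makes this guard safe.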
I suspect the error check in the %post that verifies the presence of the /tmp file is the cause of my exit code 1 error status, but that’s only an assumption based on the fact that the only other cases that exit with code 1 would have console output first and I don’t have any.
How did it get past the check in %pre that looks for the $Distribution and $Version tuple to match exactly (“fedora”, “27”) or a non-Fedora distribution?