Underkube
I wanted to simulate a Redfish BMC to be able to power on/off libvirt virtual machines and attach ISOs, just as I do for baremetal hosts.
Entering sushy-tools

sushy-tools includes a Redfish BMC emulator called sushy-emulator (see the code in the official repo). Basically, it can connect to the libvirt socket to perform the required actions, exposing a Redfish API.
metal3-io/sushy-tools container image

To easily consume it, the metal3 folks already have a container image ready for consumption on quay.io.
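As a sketch of how that image can be consumed (the listening port, image path and flags below are my assumptions; double-check against the sushy-tools docs):

```shell
# Run the Redfish emulator giving it access to the host's libvirt
# socket; sushy-emulator listens on port 8000 by default.
sudo podman run --rm --net host \
  -v /var/run/libvirt:/var/run/libvirt \
  quay.io/metal3-io/sushy-tools \
  sushy-emulator --libvirt-uri qemu:///system

# The libvirt domains then show up as Redfish systems:
curl http://localhost:8000/redfish/v1/Systems
```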
I wanted my user to have write permissions on the /var/lib/libvirt/images folder. To do so, you can just use setfacl:

$ sudo setfacl -m u:edu:rwx /var/lib/libvirt/images

The issue is that sometimes those permissions were reset to the defaults… but why? And, more importantly… by whom?
auditd

To find the culprit I used auditd to monitor attribute changes in that particular folder:

$ sudo auditctl -w /var/lib/libvirt/images -p a -k libvirt-images

Then I performed a system update just in case… and after a while…
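Once audit events start landing, they can be queried back by the key set above with ausearch:

```shell
# Show the recorded events tagged "libvirt-images", with uids,
# syscalls and timestamps decoded into readable form (-i):
sudo ausearch -k libvirt-images -i
```

The `comm=` and `exe=` fields in the output reveal which process (and package, after an `rpm -qf` on the binary) touched the folder.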
I wanted to configure a VM to act as a router between two networks, providing DHCP and DNS services as well.
                                    ┌──────┐
  public        ┌────────────┐  ┌───┤ vm01 │
  network ──────┤ dhcprouter ├──┤   └──────┘
                └────────────┘  │   ┌──────┐
                                └───┤ vm02 │
                                    └──────┘
                                 private network

public network is the regular libvirt network created by default (192.168.122.0/24).
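For the dhcprouter VM to actually forward traffic between the two networks, IP forwarding and NAT have to be enabled inside it; a minimal sketch (the interface name eth0 for the public-facing NIC is an assumption):

```shell
# Enable IPv4 forwarding persistently
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/90-forward.conf
sudo sysctl --system

# Masquerade traffic leaving through the public-facing NIC (eth0 here)
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```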
I wanted to compile the hypershift binary, but it requires golang 1.17, which is not included in Fedora 35, so I ended up doing this:
mkdir ./tmp/ && \
podman run -it -v ${PWD}/tmp:/var/tmp/hypershift-bin/:Z --rm docker.io/golang:1.17 sh -c \
  'git clone --depth 1 https://github.com/openshift/hypershift.git /var/tmp/hypershift/ && \
   cd /var/tmp/hypershift && \
   make hypershift && \
   cp bin/hypershift /var/tmp/hypershift-bin/' && \
cp ${PWD}/tmp/hypershift ~/bin/

HTH
To be able to monitor hardware health, status and information on HP servers running RHEL, it is required to install HP's Service Pack for ProLiant (SPP) packages. It seems the Management Component Pack is the same agent software but for community distros; for enterprise distros, use the SPP.
There is more info about those HP tools on the HP site.
Basically you just need to add a yum/dnf repository, install the packages and start a service (actually, the service is started as part of the RPM post-install, which is not a good practice…)
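Those steps could look roughly like the following; the repo baseurl and package/service names are placeholders that must come from HP's documentation:

```shell
# Hypothetical repo definition; take the real baseurl from HP's docs
sudo tee /etc/yum.repos.d/mcp.repo <<'EOF'
[mcp]
name=Management Component Pack
baseurl=<repo-url-from-HP-docs>
enabled=1
gpgcheck=1
EOF

# Package names vary per SPP/MCP release; hp-health is one example
sudo dnf install -y hp-health

# The RPM post-install already starts it, but verify anyway:
sudo systemctl status hp-health
```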
When deploying OpenShift IPI on baremetal, there is only so much you can tweak at installation time in terms of networking. Of course you can make changes after the installation, such as applying bonding configurations or VLAN settings via machine configs… but what if you need those changes at installation time?
In my case, I have an OpenShift environment composed of physical servers, each with 4 NICs: 1 unplugged NIC, 1 NIC connected to the provisioning network, and 2 NICs connected to the same switch and to the same baremetal subnet.
In this blog post I'm trying to perform the integration of an external registry with an OpenShift environment.
The external registry can be any container registry, but in this case I've configured Harbor to use (self-generated) certificates, made the "library" repository in the Harbor registry private (i.e. requiring user/pass), and created an "edu" user account with permissions on that "library" repository.
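Two pieces are needed for such an integration: the cluster must trust the self-signed CA, and it must have credentials for the private repository. A sketch using the standard OpenShift mechanisms (the hostname harbor.example.com and file paths are assumptions):

```shell
# Make OpenShift trust the registry's self-signed CA: the configmap
# key must be the registry hostname.
oc create configmap harbor-ca \
  --from-file=harbor.example.com=/path/to/ca.crt \
  -n openshift-config
oc patch image.config.openshift.io/cluster --type merge \
  -p '{"spec":{"additionalTrustedCA":{"name":"harbor-ca"}}}'

# Provide the "edu" user's credentials as a pull secret in a namespace:
oc create secret docker-registry harbor-pull \
  --docker-server=harbor.example.com \
  --docker-username=edu \
  --docker-password=<password>
```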
Harbor installation

Pretty straightforward if following the docs, but for RHEL7:
Introduction

I've been using Nextcloud for a few years as my personal "file storage cloud". There are official container images and docker-compose files to be able to run it easily.
For quite a while, I've been using the nginx+redis+mariadb+cron docker-compose file, as it has all the components needed to run an "enterprise ready" Nextcloud, even if I'm only using it for personal use :)
In this blog post I'm going to try to explain how I moved from that docker-compose setup to a rootless podman and systemd one.
Running a rootless Nextcloud pod

Instead of running Nextcloud as independent containers, I've decided to leverage one of podman's many features: the ability to run multiple containers as a pod (like a Kubernetes pod!)

The main benefit to me of doing so is that they use a single network namespace, meaning all the containers running in the same pod can reach each other using localhost and you only need to expose the web interface.
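As an illustration of that pod model (container names and image tags here are illustrative, not my actual setup):

```shell
# Create the pod, publishing only the web port; every container that
# joins it shares the same network namespace.
podman pod create --name nextcloud -p 8080:80

# Containers join with --pod and talk to each other over localhost:
podman run -d --pod nextcloud --name nextcloud-db \
  -e MARIADB_ROOT_PASSWORD=changeme docker.io/library/mariadb:10.6
podman run -d --pod nextcloud --name nextcloud-redis \
  docker.io/library/redis:alpine
podman run -d --pod nextcloud --name nextcloud-app \
  docker.io/library/nextcloud:fpm-alpine
```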
Nextcloud in-container user IDs

The Nextcloud process running in the container runs as the www-data user, which in fact is user id 82:

$ podman exec -it nextcloud-app /bin/sh
/var/www/html # ps auxww | grep php-fpm
    1 root       0:10 php-fpm: master process (/usr/local/etc/php-fpm.conf)
   74 www-data   0:16 php-fpm: pool www
   75 www-data   0:15 php-fpm: pool www
   76 www-data   0:07 php-fpm: pool www
   84 root       0:00 grep php-fpm
/var/www/html # grep www-data /etc/passwd
www-data:x:82:82:Linux User,,,:/home/www-data:/sbin/nologin

NFS and user IDs

NFS exports can be configured to force a uid/gid using the anonuid, anongid and all_squash parameters.
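Such an export, squashing every client uid/gid to 82 so writes land as www-data, could look like this (the export path and subnet are assumptions for illustration):

```shell
# /etc/exports entry: map all client uids/gids to 82 (www-data)
/srv/nextcloud-data 192.168.122.0/24(rw,all_squash,anonuid=82,anongid=82)
```

After editing /etc/exports, `exportfs -ra` reloads the export table.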