If you are in recovery mode on Fedora, add --no-dbus right after the snapper command, e.g.
snapper --no-dbus list
You can use the diff command to list the changes that happened between snapshots.
snapper --no-dbus diff 108..109
And to undo a change, or all the changes between two snapshots, use snapper's undochange subcommand shown below, where 108..109 is the range of changes you want to revert, essentially taking you back to snapshot 108.
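For example (keep --no-dbus only if you are still in recovery mode; drop it otherwise):
snapper --no-dbus undochange 108..109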
For some reason a lot of applications out there do not ship a prebuilt RPM package. Fortunately, plenty of applications are packaged as snaps, so we can install snapd and then install Discord through it.
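A quick sketch of that route, using the usual snapd-on-Fedora steps (the /snap symlink is only strictly needed for snaps that use classic confinement, but it doesn't hurt):
sudo dnf install snapd
sudo ln -s /var/lib/snapd/snap /snap
sudo snap install discord
You may need to log out and back in, or reboot, before the snap paths get picked up.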
You can also use the Copr repo. Visit the following link for instructions.
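For reference, enabling a Copr repository generally follows this pattern; <user>/<project> is just a placeholder and the final package name may differ, so go by the linked instructions:
sudo dnf install dnf-plugins-core
sudo dnf copr enable <user>/<project>
sudo dnf install discord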
By default, Linux and OLED displays don't really play well together: OLED panels have no backlight, so the usual brightness controls don't do anything. icc-brightness is a handy utility that resolves the problem, but all the instructions I found online were for Ubuntu/Debian based distributions.
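On Fedora the build should look roughly like this. Treat it as a sketch: the repository URL, the Makefile-based install, and the lcms2-devel dependency are assumptions to double-check against the project's README.
sudo dnf install git gcc make lcms2-devel
git clone https://github.com/udifuchs/icc-brightness.git
cd icc-brightness
make
sudo make install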
We now need a user to connect to the Samba share with. You can use the commands below to create a new user.
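A minimal sketch, using sambaUser as an example name to match the share path further down (pdbedit will prompt you to set the Samba password):
sudo useradd sambaUser
sudo pdbedit -a -u sambaUser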
pdbedit only adds an existing Linux system user to Samba, so you can skip creating a new Linux user if there is already one on the system that you can use.
You can now test to see if the share works. Open up Windows Explorer and type the IP address of the server into the address bar to connect.
\\ip-address\sambaUser
It should prompt you for a login. Enter the user and password you set up.
Connecting to Fedora Samba/CIFS server
If it loads, then congratulations! You have successfully set up a Samba/CIFS share on Fedora Server. Create new directories or files or whatever else you need.
Successfully Connected to Fedora Samba/CIFS Server
Check out the following links for more information about setting up Samba.
There are a few different ways to view RAID information on Fedora. Here are two commands that can help.
1. Print mdadm config
You can copy and paste the following command to print the mdadm configuration.
cat /etc/mdadm.conf
It should return something similar to the following.
$ cat /etc/mdadm.conf
# mdadm.conf written out by anaconda
MAILADDR root
AUTO +imsm +1.x -all
ARRAY /dev/md/Boot level=raid0 num-devices=6 UUID=21ce258a:015d0dd4:90d5b80e:ab04b7f7
ARRAY /dev/md/Root level=raid0 num-devices=6 UUID=4be32ad0:f3aa77bd:139d749d:4a6aab60
We can see from the above output that we have two RAID arrays, both RAID 0 across six drives.
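If you want more detail on a specific array (state, chunk size, member devices), mdadm can also query it directly, using one of the array names from the config above:
sudo mdadm --detail /dev/md/Root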
2. Print mdstat
You can show the mdstat output by running
cat /proc/mdstat
You should get output similar to the following.
$ cat /proc/mdstat
Personalities : [raid0]
md126 : active raid0 sdc2[0] sdf2[5] sde2[4] sdd2[1] sda2[2] sdb2[3]
5856552960 blocks super 1.2 512k chunks
md127 : active raid0 sdc1[0] sdf1[5] sde1[4] sdd1[1] sdb1[3] sda1[2]
3133440 blocks super 1.2 512k chunks
unused devices: <none>
This shows us the array sizes. mdstat reports them in 1 KiB blocks, so the big array works out to roughly 5.5 TiB and the small one to about 3 GiB; the 3 GiB array is used for the boot partition.
Other Notes
Apparently there is a difference between “mdadm” and “dm-raid”: mdadm is for creating and managing Linux software RAID arrays, while dm-raid is what interacts with a device, such as a laptop, that has a firmware “fake RAID”.
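If you want to check whether a machine actually has “fake RAID” metadata on its disks, the dmraid utility can scan for it; a quick sketch, assuming the dmraid package is still available in the Fedora repos:
sudo dnf install dmraid
sudo dmraid -r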