Sunday, May 29, 2011

Use xsel to copy text between CLI and GUI

Takeaway: The simple and versatile xsel utility bridges the gap between the Unix pipeline and the clipboard functionality of the X Window System.

The concept of a “clipboard” in common operating system usage is tied to graphical user interfaces (GUI), where a form of temporary storage makes it easier to duplicate or move text. Its use is pretty simple, though the details may vary between implementations. For instance, on MS Windows the usual way to copy text from a webpage to a text editor is to click and drag the mouse to highlight a block of text on the webpage and press Ctrl-C to copy, then click where you want to insert the text in your text editor and press Ctrl-V to paste. In the X Window System, meanwhile, a different approach is more common: click and drag the mouse to highlight the block of text in the browser that you want to copy, but do not press Ctrl-C; then use the mouse to point where you want to paste in the text editor and, without moving the pointer, click the middle mouse button.

In the world of the Unix command line interface (CLI), copying text around is generally much quicker and easier to automate. There is rarely any need to use the clipboard as a middle-man; simply use the Unix pipeline to stream text from one place to another. For instance, to copy the fifth line from one text file and attach it to the end of another text file, one might use a command like the following:

head -n 5 foo.txt | tail -n 1 >> bar.txt

The head and tail commands can take the -n N option, where “N” is a number, to specify a number of lines of text that should be selected from either the beginning (head) or end (tail) of a file. By piping the output of head (in this case, the first five lines of text from a file) through tail (to select only the single last line of its input in this case), a specified chunk of text is essentially copied out of the middle of the file. The append redirect, >>, takes whatever is sent to it and attaches it to the end of the file whose name is specified to the right of the redirect.
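The same pattern generalizes: to pull lines M through N out of a file, pipe head -n N into tail -n (N-M+1). A small hedged sketch, reusing the same example file names:

head -n 15 foo.txt | tail -n 4 >> bar.txt

This appends lines 12 through 15 of foo.txt to the end of bar.txt.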

For a more dramatic example, examine the case of copying the contents of two text files into a third, new file. To accomplish this in the graphical user interface, one would have to first open a text file, highlight its contents (using either Ctrl-A or a click-and-drag action with the mouse), and copy them, then open the new file and paste that text into it using Ctrl-V or a context menu. Next, open the second existing file, highlight, and copy — then paste that into the new file after the text you had already pasted there. This involves either having multiple files open at once or opening and closing files an awful lot along the way, and takes some time on the user’s part at every step.

To accomplish the same at the Unix shell, a simple command will suffice:

cat foo.txt bar.txt > baz.txt

The cat command was designed for the purpose of simply and easily concatenating two or more files’ contents together. By default, that information is sent to standard output, but a truncating redirect, >, takes that output as its input and writes it to a file — overwriting the contents of the file if it already exists, or creating the file from scratch if it does not already exist.

Ultimately, the process of copying and pasting is fairly simple to grasp whether using the Unix pipeline and CLI tools or using the standard, clipboard-based GUI approach of MS Windows and the X Window System. Given the computing environments in which most open source software users spend a lot of their time, both approaches are necessary parts of the toolsets at their disposal. There is one more case where yet another approach to copying text is needed: moving it between the CLI and the GUI.

Before continuing, a basic understanding of how clipboard-like functionality is handled in the X Window System is helpful. The X server supports an arbitrary number of selections, but the two most commonly used are the Primary Selection and Clipboard Selection. The Primary Selection is by default used to track currently selected text, while the Clipboard Selection is used as temporary storage when an application explicitly copies something to that selection.

The xsel tool, copyfree software available via the software management systems of most open source operating systems, is a simple utility meant to serve the need to copy text between the CLI environment and the GUI environment. It effectively acts as a pipeline-like interface between the CLI and the GUI clipboard. Anything that can be piped to a CLI utility or redirected to a file can be sent to the clipboard by way of an xsel command. For instance, to copy the contents of two files into the primary selection, this command suffices:

cat foo.txt bar.txt | xsel -i

The -i option, which simply directs xsel to read from standard input, is likely to be the most common way one would use the xsel utility, making it easy to load the contents of text files into the Primary Selection so they can be pasted into a GUI application with a middle click of the mouse. It acts like a truncating redirect, in that it replaces any current contents of the Primary Selection with whatever is piped to xsel -i. To cause xsel to behave more like an append redirect, use the -a option instead.
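For instance, a quick hedged sketch of replacing versus appending (the file names are just the running examples):

cat foo.txt | xsel -i
cat bar.txt | xsel -a

The first command loads foo.txt into the Primary Selection, replacing whatever was there; the second tacks the contents of bar.txt onto the end of the current selection.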

Those familiar with log monitoring with the tail utility will understand how xsel -f works. The manpage describes it thusly:

-f, --follow append to selection as standard input grows. Implies -i.
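A hedged example of how that might be used (the log path is only illustrative) is to keep appending new lines from a growing log to the selection as they arrive:

tail -f /var/log/messages | xsel -f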

At the other end of clipboard functionality is getting data out of one of the system’s selections. To “paste” from the Primary Selection, actually writing the contents of it to standard output (thus allowing it to be sent through the Unix pipeline), you can use xsel -o. Thus, to write the contents of the Primary Selection to a new file:

xsel -o > foo.txt

This allows the data to flow in the other direction — entered into the Primary Selection by selecting text in a GUI application and “pasting” into the Unix pipeline via the xsel utility.
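A couple of hedged examples of what that enables (the file name is arbitrary): the selected text can be counted, filtered, or saved like any other stream:

xsel -o | wc -w
xsel -o >> notes.txt

The first counts the words in the current Primary Selection; the second appends the selection to an existing file.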

The xsel utility also gives access to more than the Primary Selection. The Primary, Secondary, and Clipboard Selections can be accessed by use of the -p, -s, and -b options, respectively. The -p is generally unnecessary, though, because the Primary Selection is the default target of the xsel utility.
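For example, to work with the Clipboard Selection instead (a sketch; the file names are again arbitrary):

cat foo.txt | xsel -b -i
xsel -b -o > clipboard.txt

The first loads foo.txt into the Clipboard Selection, where most GUI applications can paste it with Ctrl-V; the second writes the current Clipboard Selection out to a file.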

xsel offers other options, of varying levels of usefulness. The xsel tool as a whole, though, is of tremendous use for those who work extensively with both the command line and graphical user interfaces.

This article was written using the Vim editor, and formatted using Markdown syntax. A command line filter utility I wrote in Ruby translates the Markdown formatted text of the article to HTML formatting; its output is piped to xsel. The command looks something like this:

muit filename.txt | xsel -i

Following that, I middle click to paste into the form used to submit the article for publication at TechRepublic. Every time I submit an article to TechRepublic (these days, generally fourteen times a month), I use xsel, in addition to my other uses of the tool.

Chad Perrin Chad Perrin is an IT consultant, developer, and freelance professional writer. He holds both Microsoft and CompTIA certifications and is a graduate of two IT industry trade schools.

How GNOME 3 is besting Ubuntu Unity

Takeaway: Jack Wallen was jonesing for GNOME 3 and discovered the best route to this new desktop was the Fedora 15 beta. Can you imagine how surprised Jack was to find out that GNOME 3 blows away Ubuntu Unity? Read on to find out more.

With the release of Ubuntu 11.04 and its default Unity desktop, it seems GNOME 3 (aka GNOME Shell) has fallen out of the spotlight. Most GNOME-based distributions have been sticking with classic GNOME and, well, there’s Ubuntu. And since GNOME 3 doesn’t play well at all with Ubuntu 11.04, it’s in a bit of a situation. Unless you’re willing to give the Fedora 15 beta a go.

I decided I needed to do just that, since I am ever so quickly becoming disillusioned by Unity. Believe me, I wanted to like Unity — and I did, at first. It seemed very slick, efficient, and just what the stale desktop needed. But then, after a few weeks of use, I realized there were many annoyances. It was time for something completely different, and that something was GNOME 3.

I’ve tried GNOME 3 plenty of times, even in its early stages, and I was impressed back then. The fastest route to GNOME 3 that I found was the beta release of Fedora 15. I knew, going into this, that I would be dealing with beta software, and that Fedora 15 marked the first release Fedora has offered with GNOME 3 as the default. But I was ready to dive in and see what GNOME 3 was looking like these days. Fortunately, I was not disappointed.

Now, I should say that this is not a review of Fedora 15. It is in beta, and it would be unfair of me to review beta software. So the point of this open source entry is really to illustrate just what the Fedora developers are doing with the GNOME 3 package and how this desktop is coming along.

Figure A

One of the first drastic improvements I noticed (one that Unity should take serious advantage of when it finally migrates to the GNOME 3 libraries) is the launcher and pager. When the mouse is moved to the upper left-hand corner (or Alt-F1 is hit), the launcher and pager appear (see Figure A). What’s immediately noticeable is that the pager has undergone some serious reworking. Prior to this release, the pager simply displayed however many desktops were configured when the launcher was opened. Now the pager shows up mirroring the launcher, where individual desktops can be selected. This image shows the pager expanded. By default, only the left edge of the pager will show. When you hover the mouse over that edge, the full pager will pull out and the specific desktop can be chosen.

This also brings to light one of the shortcomings of the current release. As it stands, there’s a profound lack of configuration tools for GNOME 3. In fact, many configuration options can only be handled through the GConf Editor or by editing .xml files. On the plus side, these configuration tools are in the works.

Outside of the lack of configuration, GNOME 3 is set up quite nicely — sleek and efficient — which are the same properties I offered up to Unity. Only this time, it’s real. GNOME 3 is not suffering from the odd crashes and annoying menu behaviors that were near deal breakers in Unity. There are also some other things that help GNOME 3 easily trump Unity.

Take, for instance, the GNOME 3 search. Built into GNOME 3 is a GNOME Do-like search/launch. You can call up the search window by opening the launcher (move the mouse to the upper left corner or hit Alt-F1), and then type a search string in the search field (see Figure B). If you want to search for something outside of your desktop, you can enter the search string and then click either Google or Wikipedia, and your default browser will open with the search results.

But don’t think these two major revisions to the GNOME 3 desktop are the sole reasons you should give it a go. One of the most impressive feats accomplished by the GNOME 3 developers is how incredibly fast and reliable the desktop has become. GNOME 3 blows away Ubuntu Unity (on similar hardware) in the speed and reliability categories. I have yet to have an issue with GNOME 3 (on a beta release), whereas I have encountered plenty of issues on Ubuntu Unity.

Will GNOME 3 overtake the desktop by storm, making everyone rush to have the latest iteration of that famous desktop? Probably not. But if you want a fresh take on the desktop, and you’ve found Unity to be less than… well, unifying, GNOME 3 should do the trick. It’s fast, it’s stable, it’s very well designed, and Fedora 15 is doing it serious justice.

I’d like to drop some props to the Fedora 15 team, as they’re doing an absolutely incredible job with this bleeding-edge Linux distribution. Fedora 15 and GNOME 3 is a serious win-win from my perspective. Give it a go, and you might find that you agree!

Jack Wallen A writer for over 12 years, Jack's primary focus is on the Linux operating system and its effects on the open source and non-open source communities.

Introduction to SELinux: Don't let complexity scare you off

Takeaway: Vincent Danen acknowledges that some of the complexity of SELinux is intimidating, but if you spend some time with it, the payoff is heightened security and better control of your system.

Most people who know Linux have at least heard about SELinux. SELinux, or Security-Enhanced Linux, was originally developed primarily by the NSA (U.S. National Security Agency), as an implementation of the Flask operating system security architecture. Flask implements MAC (Mandatory Access Control), a means of designating what processes have access to what resources (be they network ports, files, and so on). A lot of work has been done to make SELinux as easy to use as possible, although at first glance it does look hideously complex.

Since 2003, SELinux has been integrated into the mainline Linux kernel, and is fully supported in distributions such as Red Hat Enterprise Linux, Fedora, CentOS, Debian (disabled by default), Ubuntu, openSUSE, Hardened Gentoo, and others. On Red Hat Enterprise Linux and Fedora, SELinux is enabled at installation.

There is a lot of information out there on SELinux; a lot of it, if you look at it quickly, may scare you off of SELinux. Don’t feel bad; it did the same to me. When I was developing Annvix, I opted to use AppArmor instead of SELinux because it seemed the easier of the two and offered nearly the same end-result functionality. If I had to make that choice again today, however, I would choose SELinux.

From a security standpoint, distributions and Linux vendors today do a herculean job of keeping Linux operating systems safe. This is good news for anyone using Linux; however, the bad news is that a lot of this is reactive security. If there is a vulnerability in Apache, the vendor will typically backport a fix and release an update. However, there is a window of vulnerability between when the problem is made public and when the fix is released; there may be an even larger window of vulnerability if this is something that hasn’t necessarily been made public, but is already being exploited (the so-called 0-day flaws). SELinux and tools like it are designed to protect you from these flaws by offering proactive security mechanisms.

For instance, if there were a flaw in Apache that allowed an attacker to make it display arbitrary files, this could be used to display the contents of files like /etc/passwd. This in turn makes it easier for an attacker to brute force SSH accounts by knowing in advance the account names to target.

Typically, Apache doesn’t serve this content, so under normal circumstances (unless there is a flaw in a PHP application, etc.) this information would not be disclosed. If SELinux were installed and in enforcing mode, access to this file would be denied because the SELinux policy would prevent it (since these files are of the wrong type for Apache to access).

So while you may be running a vulnerable version of Apache, or a web application with a flaw in it, SELinux would prevent that disclosure because Apache has no rights to that file (of course, SELinux will happily allow Apache to display files that it does have the rights to access). In this instance, SELinux is a mitigation that would prevent a fairly serious confidentiality breach until such a time as a patched Apache could be installed. In a situation such as this, you really only have four choices:

Turn Apache off to prevent the flaw from being exploited (generally, this would be considered bad for business)
Continue to run Apache in a vulnerable state and pray for a quick update from the vendor
Roll your own update to fix it
Use something like SELinux, AppArmor, or some other MAC system for the proactive system hardening and security features it provides.

Speaking of features, SELinux has a number of features that make it compelling:

It allows applications to query the policy, so they aren’t fully hidden from view, unlike with some other MAC systems.
It allows in-place policy changes; this means you can change the policy without having to reboot in order to activate the changes (I well remember having to do this with RSBAC).
It has control over process initialization, inheritance, and program execution — this gives you the ability to write very flexible policies to suit your needs.
It has control over file systems, directories, files, open file descriptors, as well as sockets (such as TCP, UDP, etc.), messaging interfaces, and network interfaces.

Using SELinux, you can fully tweak your system for any kind of operation, be it a web server, file server, or print server; even a desktop system can be suitably secured with SELinux.

Of course, all of this comes at a price, and this is what puts most people off. SELinux is complex, and it becomes even more so when you have to write your own policies. Fortunately, distributions such as Red Hat Enterprise Linux and Fedora come with very comprehensive policies to cover a variety of situations. The policies are quite cookie-cutter, but if you want to change them to accommodate your own system (such as a web directory that exists in a place other than /var/www/html), you can easily do so.

Finally, SELinux offers three modes: enforcing, permissive, and disabled. Most distributions that support SELinux out-of-the-box will have it set to permissive mode first. This tells SELinux to log violations, so that you can use your system and use the reports to build a suitable policy. In permissive mode, a violation is logged, but the access is granted. In enforcing mode, a violation would be logged and the access denied. Disabled, of course, allows access but does not provide any information on what would have been denied. So before disabling SELinux, consider keeping it in permissive mode. You may get a number of alerts, particularly on a desktop system, but these can be used to tweak policy, reducing the number of alerts you will receive later.
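As a quick, hedged illustration (commands as found on Fedora and Red Hat Enterprise Linux; setenforce cannot switch into or out of the disabled state, which requires a reboot):

# getenforce
Enforcing
# setenforce 0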

Next time, I will look at some basics of SELinux and tweaking the policies, as well as identifying some of the tools that are used to monitor violations and modify policy.

Vincent Danen Vincent Danen works on the Red Hat Security Response Team and lives in Canada. He has been writing about and developing on Linux for over 10 years.

Saturday, May 28, 2011

Install and configure GNOME Do in Ubuntu Unity

Takeaway: GNOME Do has been one of the favorite search and launch tools for the GNOME desktop for quite some time. When Ubuntu Unity came around, it seemed like GNOME Do would be redundant and unusable. Neither is true. GNOME Do still offers features users want. In this how-to, Jack Wallen demonstrates how GNOME Do can be used in Unity.

GNOME Do is one of those tools that should be rolled into GNOME by default. This tool allows for the search and launch of applications or files. The new Ubuntu take on the desktop, Unity, has this feature baked in, but it’s not nearly as flexible as it is in GNOME Do. Fortunately, the application will still work once you’ve transitioned from standard GNOME to Ubuntu Unity.

Getting GNOME Do to work in Unity is not even remotely challenging. But if it’s installed without understanding how to get it to work properly, GNOME Do will simply not do. Here’s how to install and configure GNOME Do to work in the Ubuntu Unity desktop.

Installing and launching GNOME Do

The installation of GNOME Do is simple:

Open the Ubuntu Software Center
Search for “gnome do” (no quotes)
Click the Install button
Enter your sudo password

Figure A

After downloading, GNOME Do will install and be ready for use. But how do you use it? Tap the Super key, type gnome-do in the search area, and hit the Enter key to launch GNOME Do. Now, it gets a bit tricky. Under standard GNOME, the key combination to call forth GNOME Do is Super-Space, but the Super key is reserved for two very important functions in Unity. If Super is tapped once, it will call up the search dialog, and if it is pressed and held, the launcher icons will display numbers (see Figure A). The user can then press the number associated with the launcher to launch the application.

Figure B

So, after you launch GNOME Do, instead of using it to search for an application or file, the very first thing that must be done is to re-configure it to use a different hot key combination so that it doesn’t use the Super key. To do this, follow these steps:

Launch GNOME Do with the gnome-do command
Click on the drop-down arrow in the upper-right corner and select Preferences
Click on the Keyboard tab in the Preferences window
Double-click on the Summon Do Shortcut, and when it displays “New Accelerator,” enter the new shortcut to be used by pressing the key combination (I have configured it to use Ctrl-Space — see Figure B)
Click Close

Using and configuring GNOME Do

As I mentioned earlier, GNOME Do is quite a bit more flexible than the standard Unity search. How? GNOME Do can be configured to work in conjunction with external applications, such as Google Calendars. To make GNOME Do aware of your Google Calendar, do the following:

Open the Gnome Do Preferences window
Click on the Plugins tab
Scroll down to the Google Calendar entry
Enable the plugin by clicking the check box
Click the Configure button
In the Google Calendar Configuration window, enter the credentials for the calendar to be associated with GNOME Do
Click Apply
Close the Preferences window

Figure C

When searching Google Calendar (or any of the Google plugins for GNOME Do), it is necessary to authenticate with a Google account in the default browser beforehand, or an error will occur. It is also possible, once authenticated against the Google Calendar account, to bring up the Google Calendar event add page with the help of GNOME Do. To do this:

Open GNOME Do
Type “new event” (no quotes)
Hit Enter
When the Calendar icon appears in GNOME Do (see Figure C), hit Enter again
The default web browser will open to the Google Calendar Event Add page, which will allow you to add an event and save it

Appearance and other configurations

Naturally, GNOME Do needs to fit in with the scheme and style of the desktop. Fortunately, it is possible to theme GNOME Do. Bring up GNOME Do and click on the drop-down menu to gain access to the Preferences window. Once the Preferences window is open, click the Appearance tab, where Do’s appearance can be configured. There are four themes to choose from, as well as a few other options that affect appearance.

There is one particular preference that will not work with Ubuntu Unity. In the General tab, you will see an option to Show Notification Icon. This is not compatible with the Unity panel, as third-party panel applets are not installable. One particular plugin will also no longer work — Twitter. The Twitter GNOME Do plugin still uses basic authentication, which Twitter dropped a long time ago. This has yet to be fixed.

Extended usage

If the results GNOME Do pops up do not seem to include files and folders, it is because the directories have not been set up. GNOME Do has to be made aware of the directories it has available to search. To do this, open up the Preferences window, choose the Plugins tab, select Files and Folders, and click the Configuration button. When the new window opens, click the Add button and add the directories that GNOME Do must be made aware of for searching purposes. With the necessary folders added, the GNOME Do search results will be much more effective.

Let GNOME Do

I was very pleased to find out that GNOME Do could work in conjunction with Ubuntu Unity. GNOME Do is an incredibly powerful and handy tool that makes working on the desktop so much faster. You might be happy with the way Unity searches and launches applications and files; but if not, let GNOME Do it!

Jack Wallen A writer for over 12 years, Jack's primary focus is on the Linux operating system and its effects on the open source and non-open source communities.

Practical SELinux: Port contexts and handling access alerts

Takeaway: After introducing and recommending SELinux a couple of weeks ago, I followed up with some of the basics - learning about setting file contexts with the semanage tool. This time, I’ll show you how to set other contexts, and also look at how to handle reported access violations.

After introducing and recommending SELinux a couple of weeks ago, I followed up with some of the basics - learning about setting file contexts with the semanage tool. This time, I’ll show you how to set other contexts, and also look at how to handle reported access violations.

Previously, we concentrated on file contexts; yet another common and useful type of context to set is one related to ports. Using Apache as an example again, we know that Apache typically listens to ports 80 and 443. Default policy allows for this. If we wanted to have Apache listen on port 888 as well, however, this would not be permitted.

For instance, if you have:

...

in your httpd.conf file and attempted to restart Apache, it would fail as follows:

# service httpd restart
Stopping httpd:                                            [  OK  ]
Starting httpd: (13)Permission denied: make_sock: could not bind to address [::]:888
(13)Permission denied: make_sock: could not bind to address 0.0.0.0:888
no listening sockets available, shutting down
Unable to open logs
                                                           [FAILED]
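For reference, the httpd.conf change elided above would presumably be nothing more than an additional Listen directive, along the lines of:

Listen 888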

What happened? Looking in /var/log/messages is a great place to start:

Apr  2 15:34:03 cerberus setroubleshoot: SELinux is preventing /usr/sbin/httpd from name_bind access on the tcp_socket port 888. For complete SELinux messages. run sealert -l b9797116-ceaa-4dc8-acbc-b2fdb1dd1cfd

This is fairly useful and gives the exact command to view the alert in detail. The information used to construct this is stored in /var/log/audit/audit.log, but using sealert to view it is much easier:

# sealert -l b9797116-ceaa-4dc8-acbc-b2fdb1dd1cfd
SELinux is preventing /usr/sbin/httpd from name_bind access on the tcp_socket port 888.

*****  Plugin bind_ports (92.2 confidence) suggests  *************************

If you want to allow /usr/sbin/httpd to bind to network port 888
Then you need to modify the port type.
Do
# semanage port -a -t PORT_TYPE -p tcp 888
    where PORT_TYPE is one of the following: ntop_port_t, http_cache_port_t, http_port_t.

*****  Plugin catchall_boolean (7.83 confidence) suggests  *******************

If you want to allow system to run with NIS
Then you must tell SELinux about this by enabling the 'allow_ypbind' boolean.
Do
setsebool -P allow_ypbind 1

*****  Plugin catchall (1.41 confidence) suggests  ***************************

If you believe that httpd should be allowed name_bind access on the port 888 tcp_socket by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# grep httpd /var/log/audit/audit.log | audit2allow -M mypol
# semodule -i mypol.pp

The sealert tool offers a lot of information here, including suggestions on how to resolve the issue. Out of the three suggestions it provides, the first is the one we want. We need Apache to listen to this port. The third suggestion would work as well, but the first is the easiest of the bunch (provided the correct context is chosen). In this case, the port isn’t to be used for caching, but for serving up content, so the http_port_t type is the one to use:

# semanage port -a -t http_port_t -p tcp 888
# semanage port -l | grep http_port
http_port_t                    tcp      888, 80, 443, 488, 8008, 8009, 8443
pegasus_http_port_t            tcp      5988

Incidentally, semanage port -l works the same for ports as semanage fcontext -l works for file contexts. In the above, we can see that the http_port_t type is now applied to port 888, and Apache should start. When it does, you can verify it is listening to the port with netstat:

# netstat -lpn --tcp | grep 888
tcp        0      0 :::888                      :::*                        LISTEN      28463/httpd

If you make a mistake, it is easy enough to delete the type. Also, if you wanted to prevent a service from binding on a particular port, the semanage delete (-d) argument is used (the rest of the arguments are identical to the add command):

# semanage port -d -t http_port_t -p tcp 888
# semanage port -l | grep http_port
http_port_t                    tcp      80, 443, 488, 8008, 8009, 8443
pegasus_http_port_t            tcp      5988

The final tools to get a quick introduction are the getsebool and setsebool tools. SELinux has a number of boolean macros that allow or deny certain types of functionality. For instance, Apache has a mechanism to allow users to have their own personal web sites using the ~/public_html/ directory (which shows up as http://foo.com/~user/). By default, SELinux does not permit this type of functionality, as seen by the value of the httpd_enable_homedirs boolean:

# getsebool httpd_enable_homedirs
httpd_enable_homedirs --> off

This in itself isn’t very descriptive, so semanage can provide further information:

# semanage boolean -l | grep httpd
...
httpd_enable_homedirs          -> off   Allow httpd to read home directories
...

Incidentally, using semanage boolean -l is a great way to see what booleans can be set and what they are used for.

To enable allowing httpd to read home directories, we would use setsebool:

# setsebool httpd_enable_homedirs 1
# getsebool httpd_enable_homedirs
httpd_enable_homedirs --> on

This will persist only while the system is running. To save this boolean change to the SELinux policy files, you must use the persistent change (-P) option to setsebool:

# setsebool -P httpd_enable_homedirs 1

There is, obviously, so much more to SELinux, but these are the basics. Knowing how to change boolean settings, how to change the ability to access certain ports, files, and directories, as well as how to obtain information on SELinux violations, should provide you with the confidence to give SELinux a try. It isn’t nearly as scary as I thought it was a few years ago, and using SELinux now, on supported and recent distributions, will provide you with tools that are better and easier to use than those provided a decade ago. If you have been running a system capable of using SELinux, but have had it running in Disabled mode, you owe it to yourself to give it a try in at least Permissive mode. For a little bit of time and effort, it could save you when the next 0-day flaw that is applicable to you comes around (and, honestly, they seem to be coming around a lot more often than they used to).

Vincent Danen Vincent Danen works on the Red Hat Security Response Team and lives in Canada. He has been writing about and developing on Linux for over 10 years.

Redundancy and flexibility with RAID1+LVM

Takeaway: Vincent Danen tells you how to get reliable and flexible storage options when you combine RAID with LVM, which allows you to resize partitions inside a physical volume easily.

Development of filesystems on Linux has come far in the last few years and there are a number of important advancements that, when combined, can make for extremely flexible storage options for a Linux desktop or server. RAID support in Linux has been around for a long time, and so has LVM (Logical Volume Management) support. But have you ever considered putting the two together?

Individually, each has their own strengths and weaknesses. Together, they provide very flexible storage opportunities. RAID support in Linux has a number of options. You can combine disks to create a single large disk (RAID0) — this puts data from a RAID volume across multiple disks allowing for really good performance. There is also mirroring (RAID1), which is great for redundancy; data written to the RAID array is written to both disks simultaneously so in the event that one volume in the array dies (such as by a hard drive failure), the array can continue in degraded mode (writing to the surviving disk(s)), without data loss. With RAID0, data loss is a potential problem because if one drive in the array dies, the data it stored dies with it. The RAID10 mode combines both mirroring (RAID1) and striping (RAID0) together to provide performance and redundancy, but requires at least four disks.
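As a hedged sketch (device and array names are only examples), assembling a two-disk RAID1 mirror with mdadm looks something like this:

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

The resulting array appears as /dev/md0 and can be partitioned, formatted, or handed to LVM like any other block device.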

LVM, on the other hand, is a mechanism for easily managing partitions on various hard drives. It is similar in many respects to RAID0 or JBOD as it can create logical volumes that span multiple hard disks. The primary benefit to LVM is that it allows you to resize partitions inside a physical volume easily. If one partition needs more size, and another has room to spare, the partitions can easily be adjusted, non-destructively. Couple that with ext4's support for growing filesystems on-the-fly (shrinking still requires unmounting first) and LVM becomes very attractive indeed. It also makes creating backups easy, thanks to its snapshot feature.
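For orientation, the basic LVM building blocks look something like this (a hedged sketch; the device, volume group name, and sizes are only examples):

# pvcreate /dev/md1
# vgcreate vg_example /dev/md1
# lvcreate -L 100G -n home vg_example
# mkfs.ext4 /dev/vg_example/home

pvcreate marks the device as a physical volume, vgcreate pools it into a volume group, lvcreate carves a logical volume out of that pool, and the logical volume is then formatted and mounted like any other partition.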

Combining the two together, using LVM on top of RAID1 for mirroring, provides flexibility and redundancy. For instance, suppose you had a system with two 1TB hard drives. Traditionally you might use the disks individually, but should one of them die, the data on that drive would be lost (subject to backup policies, of course). You could instead tie the two drives together using RAID1, giving you 1TB of usable space rather than 2TB, but if one drive dies, it’s an easy matter to replace it and let the RAID array re-sync. You could have partitions on the drive for /boot, /, /home, /var, and /srv but without LVM each would be static in size. If you found out later that /var was too small, you would have some serious time-consuming work ahead of you to adjust partitions in order to make room for it.

Instead, you could create two partitions on each drive: /boot (as one RAID1 array, md0) and another not mounted that would be the physical volume for an LVM (md1). It would look like this:

# fdisk -l /dev/sda

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000408c9

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     2050047     1024000   fd  Linux raid autodetect
/dev/sda2         2050048  1953523711   975736832   fd  Linux raid autodetect

# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sda2[0] sdb2[1]
      975735676 blocks super 1.1 [2/2] [UU]
      bitmap: 1/8 pages [4KB], 65536KB chunk
md0 : active raid1 sda1[0] sdb1[1]
      1023988 blocks super 1.0 [2/2] [UU]

Here we have our two RAID arrays. The first RAID array (md0) is mounted as /boot. The second RAID array (md1) is the basis of our physical volume:

# pvs
  PV         VG          Fmt  Attr PSize   PFree
  /dev/md1   vg_cerberus lvm2 a-   930.53g    0

If we had another RAID array (say, a third and fourth drive also set up as RAID1), we could create another physical volume for use in LVM. We could then add that physical volume to the volume group as well, combining the two arrays (or devices) into one volume group. In this case we only have one, but the volume group looks like this:

# vgs
  VG          #PV #LV #SN Attr   VSize   VFree
  vg_cerberus   1   4   0 wz--n- 930.53g    0

This is the vg_cerberus volume group, which is built on one physical volume, and has four logical volumes inside of it:

# lvs
  LV   VG          Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  home vg_cerberus -wi-ao  97.66g
  root vg_cerberus -wi-ao  29.31g
  srv  vg_cerberus -wi-ao 801.59g
  swap vg_cerberus -wi-ao   1.97g

These logical volumes can be dynamically resized if the need arises, using the lvextend and lvreduce tools (as well as the resize2fs command to resize ext4 file systems). The four logical volumes (or partitions) are called home (mounted as /home), root (/), srv (/srv), and swap. These partitions are mounted like any other partition; however, they are mounted using their device-mapper device names:

# mount | grep mapper
/dev/mapper/vg_cerberus-root on / type ext4 (rw)
/dev/mapper/vg_cerberus-home on /home type ext4 (rw)
/dev/mapper/vg_cerberus-srv on /srv type ext4 (rw)
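For example, growing the home logical volume by 10GB might look like the following (a sketch; it assumes the volume group actually has free extents to give, which the example above does not):

# lvextend -L +10G /dev/vg_cerberus/home
# resize2fs /dev/vg_cerberus/home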

While all of this sounds complicated, and might be to construct it using the command line tools, it is easy to accomplish with GUI tools at installation if your distribution of choice provides it (I imagine most do). Fedora 14, for instance, made this setup a point-and-click affair during installation.

The thing to remember is that /boot cannot be on an LVM. Some distributions, particularly older ones, may not like /boot being on a RAID array either, so you may need to have /dev/sda1, for instance, mounted as /boot and /dev/sdb1 mounted as /boot2 (with a daily rsync to make sure the contents of the /boot partition are synced in case of hardware failure and a need to boot off the other drive).

Fedora 14 permits booting off of a RAID array, so it was easy to assign /dev/sda1 and /dev/sdb1 to the RAID1 array (/dev/md0). The second partitions on each disk (/dev/sda2 and /dev/sdb2) were the size of the rest of the disk, and assigned to /dev/md1. After that, the LVM was configured and /dev/md1 was assigned to the physical volume, and the logical volumes were then each defined. It took roughly five minutes to create the entire thing at install.

The end result is that if I need to give /home more space and /srv has some to spare, I can do it very easily without rebooting the system. If one drive should fail, it’s a simple matter to pull that drive out, replace it, and create a similar partition layout and re-add the two partitions to reconstruct the RAID array. Because the RAID array can run in degraded mode, the downtime is reduced to how quickly the drive can be physically replaced when the system is powered down (hot swappable drives would reduce this to almost nothing).
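A hedged sketch of that recovery, assuming /dev/sdb is the replacement disk and the original drives used a standard MBR partition table:

# sfdisk -d /dev/sda | sfdisk /dev/sdb
# mdadm /dev/md0 --add /dev/sdb1
# mdadm /dev/md1 --add /dev/sdb2
# cat /proc/mdstat

The first command copies the partition layout from the surviving disk; the next two re-add the new partitions to their arrays, and /proc/mdstat shows the re-sync progress.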

With this setup, I have the ultimate flexibility in how space is allocated on the system and can adjust it as required, knowing that if one drive fails, the data is protected.

And if it simply is impossible to meet my storage needs with the drives that are available, adding another pair of drives and adding that available storage to the volume group to extend the existing logical volumes is a piece of cake too.
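A sketch of that expansion, assuming the new pair shows up as /dev/sdc and /dev/sdd and that /srv is the volume being grown:

# mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
# pvcreate /dev/md2
# vgextend vg_cerberus /dev/md2
# lvextend -L +500G /dev/vg_cerberus/srv
# resize2fs /dev/vg_cerberus/srv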

Vincent Danen Vincent Danen works on the Red Hat Security Response Team and lives in Canada. He has been writing about and developing on Linux for over 10 years.

Practical SELinux for the beginner: Contexts and labels

Takeaway: Vincent Danen gets into some of the basics of working with SELinux. Learn how to work with contexts, which include ports, processes, files, and directories, and labels.

Last time, I introduced you to SELinux: what it is, what it can do, and really why you need it (or a system like it). It is especially important with reported (and fixed) security vulnerabilities on the rise, and each year brings more reports, and more updates for end-users to install. This data tells us that we are in greater need of proactive security measures now than we ever were before. And this is where software like SELinux fits in.

There is a lot to SELinux, and we’re only going to touch on SELinux contexts and labels. Suffice it to say, SELinux policies contain various rules that allow interaction between different contexts. Contexts are ports, processes, files, directories, and so on. Instead of getting overwhelmed with the technical concepts of SELinux, we’ll instead look at the practical side of using SELinux so that it doesn’t seem quite as daunting.

The first thing to do is determine what mode SELinux is running in:

$ getenforce
Enforcing

The getenforce command tells us what mode SELinux is in. The possible modes are Enforcing, Permissive, or Disabled. Enforcing means that SELinux will report access violations and deny the attempt, Permissive tells SELinux to report the violation but allow it, and Disabled completely turns SELinux off.

Ideally, if you are unprepared to run SELinux in Enforcing mode, it should be in Permissive mode.

This can be set at boot by editing (on Red Hat Enterprise Linux and Fedora) the /etc/sysconfig/selinux file and setting the SELINUX option:

SELINUX=enforcing

This will ensure it persists across reboot. SELinux can transition from Enforcing to Permissive easily using the setenforce command. Providing setenforce with a “0” argument will put the system in Permissive mode, and a “1” will set it to Enforcing. To transition to or from Disabled mode, you need to reboot after making the appropriate changes to the sysconfig file. Using setenforce can be a great way of troubleshooting problems; if the problem goes away after setting the system to Permissive mode, then you know it is SELinux.

If not, then it is something else.

# setenforce 0
# getenforce
Permissive

To view the contexts that a process is running with, add the Z option to ps:

# ps auxZ | grep httpd
system_u:system_r:httpd_t:s0    apache   30544  0.0  0.0 305612  6688 ?        S    Mar30   0:00 /usr/sbin/httpd

This tells us that the httpd process is using the httpd_t type, the system_r role, and the system_u user. More often than not, it is the httpd_t type you would be interested in.

The Z option is also used with other commands, such as ls, cp, id, and others. For instance, to view your security context:

# id -Z
unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

Or to view the security context associated with a file:

# ls -lZ /var/www/html/index.html
-rw-r--r--. root root unconfined_u:object_r:httpd_sys_content_t:s0 /var/www/html/index.html

Again, probably the most useful bit of information here is the type; in this case it is httpd_sys_content_t.

As an example of how to quickly change the policy, assume that you do not put your websites in /var/www/, but rather use /srv/www/foo.com/html/ for your site’s document root. You can configure Apache to use these directories as the DocumentRoot for various websites, but if you were to visit them, Apache would return an error because SELinux would disallow access. SELinux knows nothing yet about allowing Apache access to these directories.
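For context, the Apache side of that setup would be an ordinary virtual host pointing at the new document root (a hedged sketch; the domain is just the article's example):

<VirtualHost *:80>
    ServerName foo.com
    DocumentRoot /srv/www/foo.com/html
</VirtualHost>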

To determine what type a file needs to have in order for Apache to access it, the above ls command tells us that the type we want is probably httpd_sys_content_t; after all, Apache by default serves files from /var/www/html/. Another, probably better, way is to use the semanage tool.

This tool is used to show, and change, defined SELinux policy. We could also do:

# semanage fcontext -l | grep '/var/www'
/var/www(/.*)?                                     all files          system_u:object_r:httpd_sys_content_t:s0

This command tells semanage to list all fcontext entries (file contexts), and we hand that output to grep to search for and display ‘/var/www’. For brevity, the whole output is not shown, but the above confirms that /var/www/* has the httpd_sys_content_t type. So what we need to do is tell SELinux to give this same type to /srv/www/foo.com/html/*, so that Apache can serve up those files.

This can be done by using semanage to add a new context:

# semanage fcontext -a -t httpd_sys_content_t '/srv/www(/.*)?'
# semanage fcontext -l | grep '/srv/www'
/srv/www(/.*)?                                     all files          system_u:object_r:httpd_sys_content_t:s0
# restorecon -Rv /srv/www

The SELinux policies here use regular expressions, so the above tells semanage to add (-a) a new fcontext with the type (-t) httpd_sys_content_t, and targets /srv/www itself and any sub-directories and files. We use semanage to list the fcontexts and search for any ‘/srv/www’ entries, to verify it is in place, and then use restorecon to re-label and set the appropriate security context on the /srv/www directory and any sub-directories and files.

At this point, Apache will serve content from that directory if configured to do so, because Apache has the right to read httpd_sys_content_t files and /srv/www/ will now be labeled correctly.

The restorecon tool is used to set default contexts on files and directories, according to policy. You will become very familiar with this tool because it is used very often. For instance, if you move a file from a home directory to this web root, it will not immediately gain the appropriate security context because the mv command retains the existing context (cp will make a new context because it is making a new file). For instance:

% echo "my file" >file.html
% ls -Z file.html
-rw-rw-r--. vdanen vdanen unconfined_u:object_r:user_home_t:s0 file.html
% mv file.html /srv/www/foo.com/html/
% ls -Z /srv/www/foo.com/html/
-rw-rw-r--. vdanen vdanen unconfined_u:object_r:user_home_t:s0 file.html

Apache is not, by default, allowed to serve up user_home_t files, so any attempt to display this file via Apache will fail with denied access. restorecon is required to re-label the file so Apache can access it:

# restorecon -v /srv/www/foo.com/html/file.html
restorecon reset /srv/www/foo.com/html/file.html context unconfined_u:object_r:user_home_t:s0->system_u:object_r:httpd_sys_content_t:s0

Now the security context of the file is correct.

Similarly, when going from Disabled mode to Permissive or Enforcing mode, SELinux will have to re-label the entire filesystem (effectively running restorecon against the whole filesystem) because contexts are not set at all when SELinux is disabled.
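On Fedora and Red Hat Enterprise Linux, the usual way to trigger that full relabel is to create a flag file and reboot:

# touch /.autorelabel
# reboot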

Once you wrap your head around these basics of SELinux, all of a sudden it is no more difficult to use than manipulating iptables firewall rules. Just as you would adjust your firewall to allow access to a new service, you adjust SELinux file contexts to allow applications and services to access them. Yes, it does require a little more work to set up, initially, but the security benefits are really quite useful, especially considering that the bulk of this kind of manipulation will only happen when initially setting up a system or adding new services. And getting into the habit of running restorecon on new files and directories as they are created isn’t any more difficult than using ls on them to double-check their permissions.

The next and final tip on SELinux will introduce us to SELinux logging, to detect access violations, and to some other basic SELinux commands.

Vincent Danen Vincent Danen works on the Red Hat Security Response Team and lives in Canada. He has been writing about and developing on Linux for over 10 years.

Ubuntu 11.04: Installation stumbling block and post-install impressions

Takeaway: Jack Wallen is surprised with a bit of a glitch during the Ubuntu 11.04 installation. Have you had the same experience? Read on to see his final conclusion with the final release of Ubuntu 11.04.

Well, it has finally arrived. Probably the single most controversial Linux release to date. Ubuntu 11.04 — Natty Narwhal. The distribution that dared to buck the trends and go with its very own desktop that everyone said would fail. Rumor has it that even as near as a month prior to release, the Ubuntu developers wanted to can Unity and go back to standard GNOME — but the board voted them down.

“Controversy,” as Prince sang.

I had plenty of testing under my belt with the alpha and beta releases and decided Natty should be labeled (to that point) a huge success. There was so much to love about the new desktop. The whole idea behind Unity was to unify the desktop so that everything was seamless and made every aspect about simplicity. And Unity succeeded in doing just that. But how did the transition from beta to full release turn out? If the state of the beta was any indicator as to how well the full release would perform, the Ubuntu audience was in for a real treat. Did it deliver?

No.

There’s your answer. In its simplest form. But why? How could something go from doing so well in beta form to not doing so well in the full release? In a word — installation.

I had planned on migrating from Ubuntu 10.10 to 11.04 on my primary desktop. I was all prepared. Everything was backed up to an external drive, I had the standard list of applications that had to be installed immediately (GnuCash, OpenShot Video Editor, The GIMP, Chromium Browser, Claws Mail, Lucky Backup, and guvcview). I was ready. So I downloaded the ISO, burned it onto disc, and rebooted my machine.

Generally, I run the Live version of the distribution and then install from there. For Ubuntu you can choose to try it out (run the Live CD) or just immediately install the distribution. I decided to go my usual route and clicked the Try It Out button.

Nothing.

The initial screen just sat there, doing nothing. I thought that odd, so I rebooted (thinking it was just a fluke). When the initial screen popped up again, I clicked the Try It Out button and, once again, was greeted with a big squadoosh. Thinking maybe it was a bum disc or ISO, I downloaded a second copy, burned it again…

And had the same results.

So, I tried it on another machine (both machines are Shuttle PCs; the main machine is beefier, with a better NVidia graphics chipset) with the same results. This is interesting seeing as how my secondary machine was already running Kubuntu 11.04 flawlessly. But I am never one to give up. I rebooted the secondary machine one more time and, instead of pressing the Try It Out button, I clicked the Install button. That worked fine and moved on to the next step. But I (being of a curious nature) wanted to find out something. Instead of clicking the Forward button, I clicked the Back button, which returned me to the original screen. This time I clicked the Try It Out button and, imagine my surprise, it worked! The Live CD booted up and Ubuntu 11.04 was running.

Curious.

I decided to try the same thing with the main machine. It worked…but…when the desktop booted, it booted to the standard GNOME. No matter what I tried, I couldn’t get Unity to run. I know, with 100% certainty, the main machine has the hardware to run Unity 3D (it runs Compiz perfectly), so it wasn’t a hardware issue. But try as I might, Unity would not run on the primary machine (at least not with the full installation).

Now, here’s why I say Ubuntu 11.04 doesn’t deliver. Most users who want to try Ubuntu or Linux for the first time aren’t going to jump through the hoops that I did to get it running. In fact, if a new-to-Linux user had the experience I had, they might well have already run back to Windows or OS X. I realize this might well just be an NVidia issue, since I have Ubuntu 11.04 running fine on an Intel-based laptop. But NVidia is a fairly common chipset, so a lot of users are going to have these same issues. Imagine if all NVidia users run into the same issue as I had…will they even bother?

Another tiny issue (which won’t affect that many users) is the encrypted home directory. I really like this feature, but after selecting it during installation, the request to enter the encryption key is far from obvious. In fact, it’s quite easy to overlook. Forget to create an encryption key and things are going to get dicey.

Once the installation was complete, things went back to the normal, smooth Ubuntu experience. But up until that point, things simply weren’t what I had expected. I have to admit, I have done my fair share of waffling on the whole Ubuntu Unity issue, but that is not where my problem is for the final release of Ubuntu 11.04. Once running, Ubuntu Unity is a really great desktop. But if these installation issues aren’t ironed out, Ubuntu is going to find itself losing ground.

Also, for anyone expecting to configure any desktop effects, you’re going to have to install some software. It was said that Unity would be using Compiz as a compositor, but by default there is no way to configure Compiz. To do this, the Compiz Configuration Settings Manager (ccsm) package must be installed. Also, don’t expect to run the Compiz Cube, as it cannot be loaded so long as the Unity plug-in is running.
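Installing it from the command line would look something like this (the package name is taken from the Ubuntu repositories of the time, so treat it as an assumption):

sudo apt-get install compizconfig-settings-manager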

Other than that (and that being an incredibly minor issue), once the desktop is up and running it is quite good. So for those that do make it through the installation woes, the final result is well worth the trouble. Ubuntu 11.04 is an outstanding platform for all levels of user. For a first “official” release, Ubuntu Unity might well be the only desktop worthy of a production desktop at such an early age. Unity makes you feel like you’re using a GNOME-like desktop, with the speed of a much lighter-weight environment (like Fluxbox or Enlightenment).

At this point however, I would make the case that the overall experience with Kubuntu 11.04 has been far and away better. So, if you’re looking for a new release that is easy to install, and offers an amazing desktop experience, go with Kubuntu 11.04. If you’re looking for something a bit different, that might well be the future of the PC desktop (as well as the most likely candidate for Linux tablet interface) go with Ubuntu 11.04. Either way you can’t lose (unless you can’t get beyond the Try It Out button of course.)

Jack Wallen A writer for over 12 years, Jack's primary focus is on the Linux operating system and its effects on the open source and non-open source communities.

Importing iptables Configurations Into Firewall Builder


Firewall Builder is a firewall configuration and management GUI that supports configuring a wide range of firewalls from a single application. Supported firewalls include Linux iptables, BSD pf, Cisco ASA/PIX, Cisco router access lists and many more. The complete list of supported platforms along with downloadable binary packages and source code can be found at http://www.fwbuilder.org.


Import of existing iptables configurations was greatly improved in the recently released Firewall Builder V4.2. Features like object de-duplication and expanded rules recognition make it even easier to get started using Firewall Builder to manage your iptables configurations.


For this tutorial we are going to import a very basic iptables configuration from a firewall that matches the diagram shown below.



Firewall Builder imports iptables configurations in the format produced by iptables-save. The iptables-save script is part of the standard iptables install and should be present on all Linux distributions. Usually it is installed in /sbin/.


When you run this script, it dumps the current iptables configuration to stdout. It reads iptables rules directly from the kernel rather than from some file, so what it dumps is what is really working right now. To import this into Firewall Builder, run the script to save the configuration to a file:

iptables-save > linux-1.conf


As you can see in the output below, the example linux-1.conf iptables configuration is very simple with only a few filter rules and one nat rule.

# Completed on Mon Apr 11 21:23:33 2011
# Generated by iptables-save v1.4.4 on Mon Apr 11 21:23:33 2011
*filter
:INPUT DROP [145:17050]
:FORWARD DROP [0:0]
:OUTPUT DROP [1724:72408]
:LOGDROP - [0:0]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -i eth1 -s 10.10.10.0/24 -d 10.10.10.1/32 -p tcp -m tcp --dport 22 -m state --state NEW -j ACCEPT
-A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o eth0 -s 10.10.10.0/24 -p tcp -m tcp --dport 80 -m state --state NEW -j ACCEPT
-A FORWARD -o eth0 -s 10.10.10.0/24 -p tcp -m tcp --dport 443 -m state --state NEW -j ACCEPT
-A FORWARD -j LOGDROP
-A LOGDROP -j LOG
-A LOGDROP -j DROP
COMMIT
# Completed on Mon Apr 11 21:23:33 2011
# Generated by iptables-save v1.4.4 on Mon Apr 11 21:23:33 2011
*nat
:PREROUTING ACCEPT [165114:22904965]
:OUTPUT ACCEPT [20:1160]
:POSTROUTING ACCEPT [20:1160]
-A POSTROUTING -s 10.10.10.0/24 -o eth0 -j MASQUERADE
COMMIT
# Completed on Mon Apr 11 21:23:33 2011


If you are running Firewall Builder on a different system than the one running iptables, copy the file linux-1.conf from the firewall to the system where Firewall Builder is running.
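One way to do that, assuming SSH access to the firewall (the hostname and path here are hypothetical):

scp root@linux-1:/root/linux-1.conf .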


Launch the Import wizard by selecting the File -> Import Firewall menu item.


Click Browse to find the file named linux-1.conf.



Click the Continue button to move to the next step of the import process.


The next window shows a preview of the configuration file that will be imported and the type of firewall that Firewall Builder has detected it to be.



Next you need to enter a name for the firewall. This is the name that will be used in Firewall Builder to refer to the firewall after it is imported. When you click the Commit button the configuration data will be read.


By default, Firewall Builder attempts to detect whether items used in the rules, such as IP addresses, match existing items in the object tree. If there is a match, the existing item is used; if there is no match, a new object is created. This feature can be disabled by unchecking the box next to "Find and use existing objects", which will result in objects being created for every item used in the imported rules, regardless of whether they already exist in the object tree or not.



After the import is complete, Firewall Builder displays a log showing all the actions that were taken during the import. Warning messages are displayed in blue font and error messages are displayed in red.



The program tries to interpret the configuration file rule by rule and recreates the equivalent rule in Firewall Builder. Note that rules imported into Firewall Builder may not always be optimized, since Firewall Builder supports features such as multiple sources and/or destinations in a single rule, while iptables does not.


The progress window displays warning and error messages, if any, as well as some diagnostics showing the network and service objects created in the process.


As you can see from the import process log, Firewall Builder detected that there are rules in the iptables configuration that allow RELATED and ESTABLISHED traffic through the firewall. This behavior can be controlled by a setting in Firewall Builder, so a warning message is shown.


Click the Done button to complete the firewall import. Next we will go through some common post-import actions.


How To Upgrade DRBD Userland Version To 8.3.9 Under OpenSUSE 11.4

When you try to run a cluster with corosync, drbd, ocfs2 and pacemaker and install the default drbd package under OpenSUSE 11.4 through YaST or zypper, you may run into the same problem as I did.


The system reports that:

Starting DRBD resources: DRBD module version: 8.3.9
userland version: 8.3.8
you should upgrade your drbd tools!

and you cannot go any further. You can either wait until the OpenSUSE community releases new drbd packages that match the kernel's built-in module, or download the drbd source code and upgrade the drbd userland tools to 8.3.9 against the kernel source tree.
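If you want to double-check which versions you currently have before rebuilding anything, you can usually query the kernel module with modinfo and the installed tools package with rpm (assuming the package is simply called drbd, as in the standard openSUSE repositories):

modinfo drbd | grep ^version
rpm -q drbd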

This document shows you how to compile the drbd 8.3.9 package against the OpenSUSE 11.4 kernel source tree (we do need to prepare the kernel source tree, but we do not need to recompile the kernel). It is for test purposes only; it works in my environment, but I cannot guarantee that it will work for you.

Please contact me if you have any questions: wintel2006@hotmail.com. Thanks.

In this tutorial I run two OpenSUSE 11.4 32-bit servers under VMware Workstation; both were built from the OpenSUSE 11.4 live CD, which you can download from http://www.opensuse.org.

Both servers have 2 disks:

/dev/sda: OpenSUSE system OS;

/dev/sdb: for DRBD only

Server names and network addresses:

drbd1: 192.168.5.129

drbd2: 192.168.5.137

First, install the kernel source and the build tools:

zypper install kernel-source gcc flex make


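The exact name of the config file copied in the next step depends on the kernel you are running; uname -r prints it, so you can adjust the file name accordingly:

uname -r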

Now we need to prepare the kernel source. Switch to /usr/src/linux and create a copy of the .config file of the currently running kernel:

cd /usr/src/linux
cp /boot/config-2.6.37.1-1.2-desktop ./.config


To be able to run make menuconfig, we need the ncurses-devel package:

zypper install ncurses-devel


And now run make menuconfig:

make menuconfig

In the following screen, highlight "Load an Alternate Configuration File" and press Enter:


In the next window, the .config file is already selected; press Enter:


Select Exit and save the changes. The kernel source tree is now ready to be used for compiling the drbd source code.



Friday, May 27, 2011

How To Upgrade From Fedora 14 To Fedora 15 (Desktop & Server)

This article describes how you can upgrade your Fedora 14 system to Fedora 15. The upgrade procedure works for both desktop and server installations.


I do not issue any guarantee that this will work for you!


The commands in this article must be executed with root privileges. Open a terminal (on a Fedora 14 desktop, go to Applications > System Tools > Terminal) and log in as root, or, if you are logged in as a regular user, type

su


to become root.


Please make sure that the system that you want to upgrade has more than 600 MB of RAM - otherwise the system might hang when it tries to reboot with the following message (leaving you with an unusable system):

Trying to unpack rootfs image as initramfs...
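A quick way to check how much RAM is installed is free -m; the total in the Mem: row should be comfortably above 600:

free -m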


First we must upgrade the rpm package:

yum update rpm


Then we install the latest updates:

yum -y update


Next we clean the yum cache:

yum clean all


If you notice that a new kernel got installed during yum -y update, you should reboot the system now:

reboot


(After the reboot, log in as root again, either directly or with the help of

su


)


Now we come to the upgrade process. We can do this with preupgrade (preupgrade will also take care of your RPMFusion packages).


Install preupgrade...

yum install preupgrade


... and call it like this:

preupgrade


The preupgrade wizard will then start on your desktop. Select Fedora 15 (Lovelock). Afterwards, the system is prepared for the upgrade.


At the end, click on the Reboot Now button.


During the reboot, the upgrade is performed. This can take quite a long time, so please be patient.


Afterwards, you can log into your new Fedora 15 desktop.


So much for the desktop. On a server (or any system without a graphical desktop) the procedure is almost the same, except that we use preupgrade-cli instead of the graphical wizard. Again, we first upgrade the rpm package:

yum update rpm


Then we install the latest updates:

yum -y update


Next we clean the yum cache:

yum clean all


If you notice that a new kernel got installed during yum -y update, you should reboot the system now:

reboot


(After the reboot, log in as root again, either directly or with the help of

su


)


Now we come to the upgrade process. We can do this with preupgrade.


Install preupgrade...

yum install preupgrade


... and call it like this:

preupgrade-cli


It will show you a list of releases that you can upgrade to. If all goes well, it should show something like Fedora 15 (Lovelock) in the list:

[root@server1 ~]# preupgrade-cli
Loaded plugins: blacklist, langpacks, whiteout
No plugin match for: rpm-warm-cache
No plugin match for: remove-with-leaves
No plugin match for: auto-update-debuginfo
Adding en_US to language list
Loaded plugins: langpacks, presto, refresh-packagekit
Adding en_US to language list
please give a release to try to pre-upgrade to
valid entries include:
"Fedora 15 (Lovelock)"
[root@server1 ~]#


To upgrade, append the release string to the preupgrade-cli command:

preupgrade-cli "Fedora 15 (Lovelock)"


Preupgrade will also take care of your RPMFusion packages, so all you have to do after preupgrade has finished is to reboot:

reboot


During the reboot, the upgrade is performed. This can take quite a long time, so please be patient. Afterwards, you can log into your new Fedora 15 server.


Paravirtualization With Xen On CentOS 5.6 (x86_64)

This tutorial provides step-by-step instructions on how to install Xen (version 3.0.3) on a CentOS 5.6 (x86_64) system.


Xen lets you create guest operating systems (*nix operating systems like Linux and FreeBSD), so-called "virtual machines" or domUs, under a host operating system (dom0). Using Xen you can separate your applications into different virtual machines that are totally independent of each other (e.g. a virtual machine for a mail server, a virtual machine for a high-traffic web site, another virtual machine that serves your customers' web sites, a virtual machine for DNS, etc.), but still use the same hardware. This saves money, and, what is even more important, it's more secure. If the virtual machine of your DNS server gets hacked, it has no effect on your other virtual machines. Plus, you can move virtual machines from one Xen server to the next one.


I will use CentOS 5.6 (x86_64) for both the host OS (dom0) and the guest OS (domU).


This howto is meant as a practical guide; it does not cover the theoretical background, which is treated in a lot of other documents on the web.


This document comes without warranty of any kind! I want to say that this is not the only way of setting up such a system. There are many ways of achieving this goal but this is the way I take. I do not issue any guarantee that this will work for you!


This guide will explain how to set up image-based virtual machines and also LVM-based virtual machines.


Make sure that SELinux is disabled or permissive:

vi /etc/sysconfig/selinux

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - SELinux is fully disabled.
SELINUX=disabled
# SELINUXTYPE= type of policy in use. Possible values are:
# targeted - Only targeted network daemons are protected.
# strict - Full SELinux protection.
SELINUXTYPE=targeted
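To see which mode is currently active without rebooting, you can run getenforce; it prints Enforcing, Permissive or Disabled:

getenforce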

If you had to modify /etc/sysconfig/selinux, please reboot the system:

reboot


To install Xen, we simply run

yum install kernel-xen xen


This installs Xen and a Xen kernel on our CentOS system.


Before we can boot the system with the Xen kernel, please check your GRUB bootloader configuration. We open /boot/grub/menu.lst:

vi /boot/grub/menu.lst


The first listed kernel should be the Xen kernel that you've just installed:

[...]
title CentOS (2.6.18-238.9.1.el5xen)
        root (hd0,0)
        kernel /xen.gz-2.6.18-238.9.1.el5
        module /vmlinuz-2.6.18-238.9.1.el5xen ro root=/dev/VolGroup00/LogVol00
        module /initrd-2.6.18-238.9.1.el5xen.img
[...]

Change the value of default to 0 (so that the first kernel (the Xen kernel) will be booted by default):


The complete /boot/grub/menu.lst should look something like this:

# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE: You have a /boot partition. This means that
# all kernel and initrd paths are relative to /boot/, eg.
# root (hd0,0)
# kernel /vmlinuz-version ro root=/dev/VolGroup00/LogVol00
# initrd /initrd-version.img
#boot=/dev/sda
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title CentOS (2.6.18-238.9.1.el5xen)
        root (hd0,0)
        kernel /xen.gz-2.6.18-238.9.1.el5
        module /vmlinuz-2.6.18-238.9.1.el5xen ro root=/dev/VolGroup00/LogVol00
        module /initrd-2.6.18-238.9.1.el5xen.img
title CentOS (2.6.18-238.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-238.el5 ro root=/dev/VolGroup00/LogVol00
        initrd /initrd-2.6.18-238.el5.img

Afterwards, we reboot the system:

reboot


The system should now automatically boot the new Xen kernel. After the system has booted, we can check that by running

uname -r

[root@server1 ~]# uname -r
2.6.18-238.9.1.el5xen
[root@server1 ~]#


So it's really using the new Xen kernel!


We can now run

xm list


to check if Xen has started. It should list Domain-0 (dom0):

[root@server1 ~]# xm list
Name                                      ID Mem(MiB) VCPUs State   Time(s)
Domain-0                                   0     3343     2 r-----     18.1
[root@server1 ~]#
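For a quick overview of the hypervisor itself (Xen version, total memory, number of CPUs and so on), you can also run:

xm info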


Setting Up A Spam-Proof Home Email Server (The Somewhat Alternate Way) (Debian Squeeze)

Email spam is a huge problem. I have found quite a simple solution for myself; however, it will take some time to "migrate" over to it completely.

The solution is to create a unique email address every time I have to give an email address to someone else or to a website to sign up. If I want an account at Twitter, I'd use "www.twitter.com@MYDOMAIN.COM". For web-based services, I use the full domain name including the subdomain (www) on the left of the @ (some poorly designed websites do not recognize the www. as part of a valid email address; for those I just leave it out).

For people I use a format like this: "email.john.doe@MYDOMAIN.COM". You could also use something like "from.john.doe@MYDOMAIN.COM". The good thing is that the left side of the @ in an email address is almost "unlimited".

Because I generate a unique email address for every contact, I can easily find out where an address got leaked, and then I can simply remove that address.

This howto will set up a fully functioning email server together with scripts to make email management easy. It also covers the DNS setup - even if you are on a dynamic address - e.g. if you want to run your own little mail server from home.

In this howto I use Debian Squeeze as the server. For other Linux distros you'll have to adapt the steps accordingly.

A short summary of what is done in this howto:

Obtaining a domain name
Taking care of a dynamic IP - if necessary
Taking care of the DNS and routing
Setting up Postfix
Setting up Procmail
Setting up Dovecot
Setting up a webserver for email address management
Setting up Thunderbird with an addon

In this howto I also rely on a few other howtos here - especially regarding the setup of the BIND and email server. For those, I copied more or less from Falko's Perfect Debian Server howtos. Also, the email relay section was borrowed from a howto here by sjau. Without those, I probably would not have been able to set this up.

Before you can start running your own mailserver you need a domain name for which you can also set MX records. I don't want to make any suggestion as there are tons and tons of domain registrars out there. One of the cheapest I know of is GoDaddy.

I don't use GoDaddy myself but as far as I've heard they provide a solid service.

Another challenge is how to handle things on a dynamic IP address. If you don't have a dedicated box rented somewhere but use your home internet connection, then you very likely do not have a static IP.

On the web it's essential that others always know where to reach you, which normally means a static IP. However, there are services that help you work around that.

One of the services I use is EveryDNS. They let me host the DNS for the domain name.

As of now they still offer the service for free. Although they were bought up in 2010, the promise was that existing customers who had donated money could keep using the system for free. Their webpage doesn't yet mention that new customers need to pay - but I don't know for sure.

The reason for choosing EveryDNS is that they also offer a little Perl script that can be used to update the DNS. This is essential, as your IP changes over time if you don't have a static IP address. You can get the Perl script from here.

Once you have a domain name, first go to What Is My IP; it will show you your current public IP address. Then create an account at EveryDNS and make at least the following entries, where MYDOMAIN.COM would be your domain name:

(1) Make an "A" record type, set as fully qualified domain name "MYDOMAIN.COM" and set as value your public ip address

(2) Make a "CNAME" record type, set as fully qualified domain name "*.MYDOMAIN.COM" and set as "MYDOMAIN.COM"

(3) Make a "MX" record type, set as fully qualified domain name "MYDOMAIN.COM", set as value "MYDOMAIN.COM" and set as "MX Value" "10"

What we just did is set up the DNS for the domain. The main domain is found at your IP address (A record), all other names are also found there (the * in the CNAME record pointing to the main domain) and we also operate a mail server there (MX record).
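Once the records have propagated, you can verify them from any machine with dig (on Debian it is in the dnsutils package); the first query should return your public IP and the second your mail exchanger:

dig MYDOMAIN.COM A +short
dig MYDOMAIN.COM MX +short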

The following things are being done as root user - unless told otherwise.

As said before, if you don't have a static IP address you will need to regularly update the DNS info.

cd /root
wget http://www.everydns.net/eDNS.pl
chmod 0755 eDNS.pl

touch eDNS.sh
echo "#!/bin/bash" > eDNS.sh
echo "perl /root/eDNS.pl -u USERNAME -p PASSWORD -d MYDOMAIN.COM" >> eDNS.sh
chmod 0700 eDNS.sh

Replace USERNAME and PASSWORD with your everydns login credentials.
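It's worth running the script once by hand before relying on cron, to confirm that the credentials work and the update goes through:

/root/eDNS.sh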

I love to work with a cron.txt file that contains all crons. I think it's a lot simpler to maintain it like that.

First you have to check out if there is already a cron entry:

crontab -l

If there is no cron entry yet, then just run the following commands

touch cron.txt
chmod 0700 cron.txt
echo "*/5 * * * * /root/eDNS.sh >/dev/null 2>&1" > cron.txt

If there are already cron entries, copy them into a new cron.txt file and then also add the following line:

*/5 * * * * /root/eDNS.sh >/dev/null 2>&1

Now we load the cron.txt as cron:

crontab cron.txt

And we check if it was added properly:

crontab -l

The next problem we're facing is how to resolve the domain inside your LAN. If your mail server is behind a router, it will probably have a local IP like 192.168.0.x or 10.0.0.x.

If you are behind a router, you will need to forward the following ports to your server: 25, 80, 143, 443, 991. More ports could be required, such as 587.

Also, we face the problem of how to resolve the domain name from inside the LAN. From outside the LAN, the DNS entry points to your current public IP address. However, when you are inside the LAN and make a DNS query, it will only return your public IP, and usually that will fail.

There are several solutions for that problem - if the problem even exists at all.

One way would be to use dnsmasq on the router (e.g. DD-WRT or Tomato firmware). However, as I can't guarantee that this works, the only other option I see is to set up a full-fledged DNS server on your mail server.

In this tutorial I'll use a chrooted Bind9, as I am most familiar with it. For other DNS servers you'll find plenty of documentation online.

apt-get install bind9
/etc/init.d/bind9 stop

OPTIONS="-u bind -t /var/lib/named" mkdir -p /var/lib/named/etc
mkdir /var/lib/named/dev
mkdir -p /var/lib/named/var/cache/bind
mkdir -p /var/lib/named/var/run/bind/run

mv /etc/bind /var/lib/named/etc

ln -s /var/lib/named/etc/bind /etc/bind

mknod /var/lib/named/dev/null c 1 3
mknod /var/lib/named/dev/random c 1 8
chmod 666 /var/lib/named/dev/null /var/lib/named/dev/random
chown -R bind:bind /var/lib/named/var/*
chown -R bind:bind /var/lib/named/etc/bind

Tell rsyslog to provide an additional log socket inside the chroot (on Debian Squeeze, for example by putting the following line into /etc/rsyslog.d/bind-chroot.conf):

$AddUnixListenSocket /var/lib/named/dev/log

Then restart rsyslog and start Bind:

/etc/init.d/rsyslog restart
/etc/init.d/bind9 start

and check /var/log/syslog for errors.
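One simple way to do that is to look at the most recent log lines from named, for example:

grep named /var/log/syslog | tail -n 20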

Now we have set up Bind9 in a chrooted environment. The next thing to do is to actually add a zone file for your domain.

zone "MYDOMAIN.COM" IN { type master; file "/etc/bind/zones/MYDOMAIN.COM.db"; allow-update { none; };};mkdir /etc/bind/zones
touch /etc/bind/zones/MYDOMAIN.COM.db
chown -R bind:bind /etc/bind/zones/MYDOMAIN.COM.db

Then put the following into /etc/bind/zones/MYDOMAIN.COM.db:

$TTL 86400
@   IN  SOA @ MYDOMAIN.COM. (
            1     ; serial
            2600  ; refresh
            15M   ; retry
            3600  ; expiry
            360 ) ; minimum
@   IN  NS  ns.MYDOMAIN.COM.
ns  IN  A   LOCALIP
www IN  A   LOCALIP
MYDOMAIN.COM.   IN  A   LOCALIP
MYDOMAIN.COM.   IN  MX  10 LOCALIP

Of course, replace MYDOMAIN.COM with your actual domain name and LOCALIP with your static LAN IP address. Basically, we declare here that the nameserver for the domain is "ns.MYDOMAIN.COM" and that "ns.MYDOMAIN.COM" is found at the static local IP address. Then restart Bind9:

/etc/init.d/bind9 restart
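To confirm that the chrooted Bind answers for the zone, query it directly (again, dig comes from the dnsutils package; replace LOCALIP with your server's LAN IP):

dig @LOCALIP MYDOMAIN.COM A +short
dig @LOCALIP MYDOMAIN.COM MX +short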

While Bind9 is set up now, there's one last thing to do. On your router you have to change the nameserver order: the first nameserver must now be your "mail server" with its static local IP - otherwise the whole Bind9 setup was for nothing. As the second nameserver, enter whatever was already there as the first one. Depending on your router this can be a bit trickier.


Fedora 14 Samba Standalone Server With tdbsam Backend

This tutorial explains the installation of a Samba fileserver on Fedora 14 and how to configure it to share files over the SMB protocol as well as how to add users. Samba is configured as a standalone server, not as a domain controller. In the resulting setup, every user has his own home directory accessible via the SMB protocol and all users have a shared directory with read-/write access.


I do not issue any guarantee that this will work for you!


I'm using a Fedora 14 system here with the hostname server1.example.com and the IP address 192.168.0.100.


Please make sure that SELinux is disabled as shown in chapter 5 of this tutorial: The Perfect Server - Fedora 14 x86_64 [ISPConfig 2] - Page 3


Connect to your server on the shell and install the Samba packages:

yum install cups-libs samba samba-common


Edit the smb.conf file:

vi /etc/samba/smb.conf


Make sure you see the following lines in the [global] section:

[...]
# ----------------------- Standalone Server Options ------------------------
#
# security = the mode Samba runs in. This can be set to user, share
# (deprecated), or server (deprecated).
#
# passdb backend = the backend used to store user information in. New
# installations should use either tdbsam or ldapsam. No additional configuration
# is required for tdbsam. The "smbpasswd" utility is available for backwards
# compatibility.
#
        security = user
        passdb backend = tdbsam
[...]

This enables Linux system users to log in to the Samba server.


Then create the system startup links for Samba and start it:

chkconfig --levels 235 smb on
/etc/init.d/smb start


Now I will add a share that is accessible by all users.


Create the directory for sharing the files and change the group to the users group:

mkdir -p /home/shares/allusers
chown -R root:users /home/shares/allusers/
chmod -R ug+rwx,o+rx-w /home/shares/allusers/


At the end of the file /etc/samba/smb.conf add the following lines:

vi /etc/samba/smb.conf

[...]
[allusers]
        comment = All Users
        path = /home/shares/allusers
        valid users = @users
        force group = users
        create mask = 0660
        directory mask = 0771
        writable = yes

If you want all users to be able to read and write to their home directories via Samba, add the following lines to /etc/samba/smb.conf (make sure you comment out or remove the other [homes] section in the smb.conf file!):

[...]
[homes]
        comment = Home Directories
        browseable = no
        valid users = %S
        writable = yes
        create mask = 0700
        directory mask = 0700

Now we restart Samba:

/etc/init.d/smb restart


In this example, I will add a user named tom. You can add as many users as you need in the same way, just replace the username tom with the desired username in the commands.

useradd tom -m -G users


Set a password for tom in the Linux system user database. If the user tom should not be able to log into the Linux system, skip this step.

passwd tom


-> Enter the password for the new user.


Now add the user to the Samba user database:

smbpasswd -a tom


-> Enter the password for the new user.


Now you should be able to log in from your Windows workstation with the file explorer (address is \\192.168.0.100 or \\192.168.0.100\tom for tom's home directory) using the username tom and the chosen password and store files on the Linux server either in tom's home directory or in the public shared directory.
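If you'd like to verify the setup from the Linux side before going to a Windows machine, smbclient (from the samba-client package, which may need to be installed first) can list the shares visible to tom and log in to them:

yum install samba-client

smbclient -L localhost -U tom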


The Perfect Server - Ubuntu Natty Narwhal (Ubuntu 11.04) [ISPConfig 2]

This tutorial shows how to set up an Ubuntu Natty Narwhal (Ubuntu 11.04) server that offers all services needed by ISPs and hosters: Apache web server (SSL-capable), Postfix mail server with SMTP-AUTH and TLS, BIND DNS server, Proftpd FTP server, MySQL server, Courier POP3/IMAP, Quota, Firewall, etc. In the end you should have a system that works reliably, and if you like you can install the free webhosting control panel ISPConfig 2 (i.e., ISPConfig runs on it out of the box).


I will use the following software:

Web Server: Apache 2.2.17 with PHP 5.3.5, Python, Ruby, and WebDAV
Database Server: MySQL 5.1.54
Mail Server: Postfix
DNS Server: BIND9
FTP Server: proftpd
POP3/IMAP: I will use Maildir format and therefore install Courier-POP3/Courier-IMAP.
Webalizer for web site statistics

Please note that this setup does not work for ISPConfig 3! It is valid for ISPConfig 2 only!


I want to say first that this is not the only way of setting up such a system. There are many ways of achieving this goal but this is the way I take. I do not issue any guarantee that this will work for you!


To install such a system you will need the following:


In this tutorial I use the hostname server1.example.com with the IP address 192.168.0.100 and the gateway 192.168.0.1. These settings might differ for you, so you have to replace them where appropriate.


Insert your Ubuntu install CD into your system and boot from it. Select your language:



Then select Install Ubuntu Server:



Choose your language again (?):



Then select your location:



If you've selected an uncommon combination of language and location (like English as the language and Germany as the location, as in my case), the installer might tell you that there is no locale defined for this combination; in this case you have to select the locale manually. I select en_US.UTF-8 here:



Choose a keyboard layout (you will be asked to press a few keys, and the installer will try to detect your keyboard layout based on the keys you pressed):



The installer checks the installation CD, your hardware, and configures the network with DHCP if there is a DHCP server in the network:


