This is the first part of the jimbodude.net Linux tutorial. It is meant to introduce Linux to people who have a reasonable amount of technical experience but not enough time to spend reading books or doing research on their own. At the end of this document, you should understand what Linux is, how users and files are organized, how to navigate the filesystem, and how to manipulate files. I will not discuss how to install Linux, as that is different for each version of each distribution. See the documentation for the version you plan on using.
- 1 What is Linux?
- 2 Why Linux?
- 3 Using Linux
- 3.1 How to Connect to a Linux Machine
- 3.2 Understanding the Linux Filesystem
- 3.3 Understanding Users
- 3.4 Important Commands and Concepts
- 3.4.1 Navigating the Filesystem
- 3.4.2 Finding Files
- 3.4.3 Manipulating Files
- 3.4.4 Text Editors
- 3.4.5 The "su" command
- 3.4.6 The "sudo" command
- 3.4.7 About Commands
- 3.4.8 Getting Help
- 4 Conclusion
- 5 Notes
- 6 See Also
What is Linux?
"Linux" generally refers to any operating system using the Linux kernel. The Linux kernel is available under the GNU General Public License, which makes it free for most uses and allows it to be distributed as open source. The kernel was first released by Linus Torvalds in 1991. For more information about Linux and its history, see Wikipedia:Linux kernel and Wikipedia:Linux.
Linux was originally developed as a desktop operating system for "hard core" users. It has since evolved into a server operating system, which has found its way into many corporations, and into a user-friendly desktop operating system. Linux provides a huge amount of flexibility for developers because it supports many programming languages and is available on many platforms beyond desktop computers. The main goal of the project was to provide a free Unix-like operating system to the masses.
Linux uses a monolithic kernel, which means that there is minimal layering within the kernel. This maximizes performance by eliminating excessive abstraction, but can cause some low-level problems if developers aren't careful. By design, everything running on a Linux system is very modular. Most components of a Linux system are designed to be interchangeable with other components that provide the same functionality. Many components are not required to be installed or running for the system to operate, which saves resources for the things that do need to be running. This is true of many aspects of Linux software including graphical interfaces, networking, server software, printing, and sound.
The Linux kernel has been ported to work on many common devices including routers (see DD-WRT for Linksys WRT routers), MP3 players (such as the Apple iPod), phones, and gaming systems (such as the Sony PS2, Nintendo GameCube, Microsoft Xbox, and Xbox 360). Many embedded systems (such as TiVo and older Linksys routers) also use Linux-based systems out of the box.
The term "Linux" technically refers only to the operating system kernel, but the kernel alone is relatively useless. So people put together more complete software packages which include applications and supporting software; these packages are called distributions. There are so many distributions of Linux available it isn't even funny, and each has its own benefits and disadvantages. I will only cover a few of the major ones here briefly. More information about each distribution is available from Wikipedia by clicking on the distribution title, and from each distribution's vendor by clicking on their listed website.
Slackware (http://www.slackware.com/) is one of the earliest distributions. It was released in 1993. It is meant to contain only the most stable open source programs available and be as simple as possible. By simple, I do not mean simple to operate. The developers leave out many features that could hinder performance or create configuration problems. Software is organized in packages, but there is a pretty serious lack of package management tools. For instance, there is no easy way to determine if all an application's dependencies are installed. Packages are stored in simple tar-balls (like a zip file for you Windows folk), which is consistent with the minimalist attitude of Slackware. Technical people don't seem to think this is an issue, but the average user should beware.
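Since Slackware-style packages are just compressed tar archives, you can see the whole idea with the standard tar command. This is only a sketch - the package name, directory, and file contents below are made up for demonstration, and real packages carry more (such as install scripts):

```shell
# Scratch area for the demonstration (made-up paths)
mkdir -p /tmp/slack-demo/pkg /tmp/slack-demo/extracted
cd /tmp/slack-demo
echo "demo contents" > pkg/readme.txt

# Pack a directory into a gzip-compressed tarball,
# the same container format a Slackware package uses
tar -czf mypackage.tgz pkg

# Unpacking it is all a minimal "install" amounts to
tar -xzf mypackage.tgz -C extracted
cat extracted/pkg/readme.txt
```

Notice there is no dependency bookkeeping anywhere in that process - which is exactly the gap the paragraph above describes.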
Red Hat and Red Hat Enterprise
Probably one of the most famous distributions is Red Hat (http://www.redhat.com/). Red Hat 1.0 was released near the end of 1994. It has always been targeted at the server market. It includes a built-in installer called Anaconda and the ability to script installation using kickstart. Software is organized in RPM packages, which allow for fairly easy program installation and dependency checking. Red Hat now also uses yum to extend its package management; yum was inspired by Debian's apt, which allows the user to run a single simple command to download, resolve dependencies for, and install packages. Red Hat Linux used to be available for free; the company made its money by selling service contracts and guaranteed uptime. Around 2003, Red Hat renamed its product from Red Hat Linux to Red Hat Enterprise Linux and stopped distributing it for free.
Fedora Core (http://www.fedoralinux.org/) is a project sponsored by Red Hat and based on Red Hat Linux. The project was started at the same time as Red Hat stopped allowing free distribution of their software. The idea is for Fedora Core to be a completely open-source, general purpose operating system for home and recreational use; Red Hat Enterprise should be stable enough to be sold to corporations as a server operating system for profit. Most major functional items in Fedora Core are pretty much the same as Red Hat, except that more "cutting edge" and recreationally targeted technologies are available in Fedora Core. I used Fedora for about 5 years, and am still very interested in and supportive of the project.
Yellow Dog (http://www.yellowdoglinux.com) originally used Red Hat as a starting point, and now bases itself on Fedora Core. It was released in 1999 for the PowerPC architecture and currently supports Macs, some IBM servers, and (partially) the Sony PlayStation 3. As a Red Hat/Fedora spin-off, Yellow Dog uses RPM package management. Yellow Dog costs between $30 and $90 depending on which package is chosen.
Debian Linux (http://debian.org) is best known for its strict policies on package quality and dependency structure. It was originally announced in late 1993, but its first stable release wasn't until 1996. Calling it a Linux distribution is a little misleading, as the project simultaneously develops on multiple kernels; however, its official releases use the Linux kernel exclusively. Debian is famous for its package management using the apt tool (Advanced Packaging Tool), which allows users to run a single command to download, resolve dependencies for, and install a package. Debian boasts over 15,000 pre-compiled packages, all kept in multiple clearly labeled development branches.
Ubuntu (http://www.ubuntu.com/) is a Debian-based distribution focused on ease of use for the user. As such, many consider it a good distribution for Linux beginners to start with. It began in 2004 as a fork of the Debian project, but soon became its own entity. Ubuntu, like Debian, uses free software. Ubuntu developers generally try not to provide more than one package that does the same thing, to avoid confusion and software conflicts, unlike Fedora and Debian which provide many packages with the same purpose. I currently use Ubuntu as the primary operating system for my laptop and media PC. I've been very impressed by its ease of use and stability without significant loss of control and power.
SUSE (http://www.opensuse.org) originated in 1992 as a German translation of Slackware. It was one of the first distributions to feature the X server (GUI management) and TCP/IP networking. SUSE is currently owned by Novell, which has released its formerly proprietary RPM-based package update tool, Yet Another Setup Tool (YaST2), to the public under the GPL. YaST2 also manages disk partitions, system setup, network and firewall configuration, user administration, and much more. SUSE comes packaged with the ability to read and resize NTFS (Windows post-2000) partitions, and supports drivers for the winmodems and softmodems common in low-end or space-confined systems - a feature uncommon in other distributions. Several versions of SUSE are available depending on the anticipated use of the software.
Knoppix (http://www.knopper.net/knoppix) is a Debian-based bootable CD distribution. In other words, no installation is required. Once the system is booted with the Knoppix CD or DVD in the drive, the Linux kernel and all Knoppix additions decompress to a RAM drive and the operating system starts. Tools on the disc range from movie players to web browsers to file system recovery tools. Knoppix is good for saving your commonly corrupted Windows partitions, just playing with Linux at no risk, or scaring the pants off your buddy when he goes to the store and comes back to find his computer running Linux. It comes in a CD version and a DVD version, the latter with much more software and many more features.
Why Linux?

There are a lot of reasons you should consider Linux. Here's an overview of the most important points. There's a whole website dedicated to this question called whyLinuxIsBetter.net, and LinuxOnline.org also maintains its own list of reasons. Check them out if this doesn't convince you - which is unlikely...
As mentioned earlier, Linux designers prefer a far more segmented approach to the application hierarchy. For this reason, it is possible for programs to have catastrophic failures and still not affect other processes, including the all-important kernel process. I can honestly say I have never seen a program capable of crashing the kernel during normal operation - and I've seen some pretty ridiculous program crashes.
The common Linux file systems (the EXT family) resist fragmentation and are very robust when it comes to corruption. I won't get into the technical details here - just a quick overview and some storytelling. You generally never need to defragment your drive, and if you experience a mild hardware failure, as I have, your data is much more likely to survive on an EXT partition than on a FAT or NTFS (Windows) partition. I've actually been running with a failing hard drive for about 2 months now. It has a very high read/write error rate, which is reflected in the drive's SMART status. It costs a little bit more processor power to run with it, but it's still working. I've only had to correct the file system structure twice - both times the system was still operational, just exceptionally slow to respond. Each repair took only one command, which the system automatically told me needed to be run at boot time, and less than 10 minutes of that command doing its thing. Of course, this is an extreme case, and I will be replacing the drive very soon. The bottom line is that it's robust, and you probably won't have any problems with it unless your hardware dies on you.
Hate not having that one little program to open some special file? Maybe it's a really old file format, or maybe someone is using a special program that you don't have and can't afford. Linux has all the special file openers you'd ever want, plus a bunch of others you've never heard of. Unlike Windows or Mac, where someone is out to make money off of your need, the Linux community generally shares its programs freely. If someone had a need before you, there's likely something out there already to solve your problem.
Cost and Availability
FREE. That should be about all I have to say, but for completeness, I'll continue. Most Linux distributions are free for non-commercial use. The commercial packages, which usually include support contracts and uptime guarantees, do cost money, but generally the cost is less than a Windows Server package - which doesn't really say much knowing the costs of Microsoft products...
Linux distributions are also available for download from thousands of mirror servers all over the world, so it's easy to obtain - you don't even need to leave your chair if you don't want.
Not only is the software cheaper, but so is the hardware. You can run a basic Linux system on something running at about 300 MHz with 128 MB of slow RAM and a 2 GB hard drive. (No, this document was not written in 1990 - it's almost 2008 and this fact still holds true!) It won't be the fastest thing you've ever used, and it will have some limitations if you're looking into GUI interfaces or server applications, but it will work. Linux doesn't require the huge amount of hardware needed by other operating systems. For example, the recommended system spec for Windows Vista is:
- 1 GHz processor
- 1 GB RAM
- 40 GB drive with 15 GB free
- DirectX 9 capable graphics card
- DVD-ROM drive
Compared to the Fedora Core 6 recommended requirements for an installation with graphical interface:
- 400 MHz Processor
- 192 MB RAM
- Maximum of 9 GB of disk space (in my experience, most installations take about 5-7 GB)
- 5% of drive free after installation
Keep in mind these are the requirements just for the operating system; if you plan to do anything above and beyond, you'll need more. But on the same hardware recommended for Vista, you have a pretty high-powered Linux system - so you can save hundreds on hardware. Looking for a use for that old machine that can barely run Windows XP now that Vista is out? You have yourself a candidate for a new Linux box! (See Migrating to Linux for an interesting dual-operating-system implementation.)
As mentioned earlier, Linux supports a huge variety of programming languages, and development tools are packaged with just about every distribution. The Linux community is famous for providing a huge number of applications to their fellow Linux users. If you're aiming to become a developer, there's no better place to start. Just about everything is open source, so you can poke around and solve problems on your own. Submit a patch and have your work included in the next release. It's a proactive approach - no more waiting for the big corporation to try to recreate the problem and figure out a bug fix. Pretty neat, very fast. If you're just a regular user, you can expect some really high-grade applications to be coming your way because of the same principle: some really nerdy guy (like me) decides he wants a new feature added to the software, so he adds it, and then you can have the updated version with his changes. It's also very easy to get your own ideas heard on a project; in fact, outside input is encouraged. This way the software can evolve based on what the users ask for, not just what the developers cook up at some corporate meeting. Tons of projects work like this - the possibilities are limitless.
Linux is not capable of getting other operating systems' viruses, and there are very few viruses capable of running in a Linux environment. For that reason, it is generally unnecessary to run any sort of virus protection on a Linux system, which saves tons of effort and computing resources.
The Linux user structure also provides a significant amount of protection to the system. No regular user's processes can alter important system files. Remember that every process runs as some system user account, even if a human user didn't start it. This means that only programs run as root can alter important system files. We'll cover users and user security in a little more detail later in this article.
Although Linux can be configured to be accessed from anywhere over many different protocols, there is almost always a secure way to make the connection. Services like Windows networking and Microsoft's Remote Desktop Connection have been shown to have major flaws if not protected by strong firewalls. SSH, X, and VNC all have options to encrypt transmissions automatically upon connection which negates this problem almost entirely. It's also possible to tunnel just about any connection through SSH, making connections to other services, such as SQL or corporate intranet servers, also encryptable without any additional configuration required.
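For example, tunneling a database connection through SSH is a one-liner. The host name, user name, and ports below are placeholders - substitute your own:

```shell
# Forward local port 3307 through the SSH connection to port 3306
# (a common database port) on the remote machine.
# "user" and "server.example.com" are placeholders.
ssh -L 3307:localhost:3306 user@server.example.com

# While that session stays open, a local client pointed at
# 127.0.0.1 port 3307 reaches the remote service over the
# encrypted tunnel - no extra server-side configuration needed.
```

The service on the far end never knows the connection was tunneled; it just sees a local connection from the SSH server.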
Linux might not have the best support for the latest and greatest hardware, but you can bet that there will be support for just about everything you need including:
- Proprietary graphics drivers for your hot nVidia chip - nVidia has provided proprietary Linux drivers for its newest graphics cards for quite a few years, to get the best performance possible
- Wireless networking
- Just about every hard drive adapter known to man - out of the box - no need for that stupid "Have Disk" option in the middle of your operating system installation
- Analog and digital sound input/output (see Linux Media for more information about getting AC-3 streams from your computer to those sweet surround sound speakers of yours)
- Multiple display output - including support for displays at different resolutions, TVs, digital, analog, whatever you want
- Tons of USB devices
- Tons of serial and IR devices
- TV Tuners
- FM Tuners
- Advanced network interfaces
- Lots and lots more cool toys you'd love to play with
Why Not Linux?
So, everything has its drawbacks right? Well, so does Linux.
- If you're a hardcore gamer, you're going to be disappointed. There are hundreds of Linux games available, and just about all of them are free, but few if any major titles make releases for the Linux platform, so you won't be playing the most advertised games.
- If you use specific software to get your work done, like Adobe's Photoshop or Acrobat products, Microsoft Office, etc., you might have a bit of trouble adjusting your work flow to Linux. There are Linux applications that provide much of the same functionality as these titles, but they often use their own file formats, which your boss won't be able to edit without the same software.
- If you're doing heavy design or graphics work, you'll find limitations quickly. There is little support for color correction and, as mentioned before, almost no major graphics and design applications are available for Linux.
Now, all that said, that's about the extent of Linux's bad points. If any of that applies to you, then you should do what I do and have a Windows machine for games and photo editing, and a Linux machine for everything else. It's funny that the Linux machine is a million times more useful at a third the cost...
There are also compatibility layers designed to run software written for other platforms. One of the most famous is WINE, a Windows compatibility layer for Linux. I won't be covering this sort of thing any time soon because I don't have very much experience with it. I've heard from people that they were able to get some DirectX games to work in WINE with some effort, as well as many of the big-name productivity titles. See the WINE application database for more information about what works and what doesn't.
Using Linux

If you're just looking at a Linux GUI, things don't appear much different from Microsoft Windows or MacOS or any other GUI environment, but under the hood there's a whole different story.
How to Connect to a Linux Machine
I will use the term connect very loosely to mean anything you do to the computer before you can make the computer do what you want. So don't think I'm only talking about remote connections here, I will discuss both remote and local (physical) connections.
The first thing to realize about Linux is that it is designed in a way that you can access any part of the system from anywhere if you configure it correctly. We won't cover configuration in this tutorial, but probably in a later one. For now, I'm going to assume you are connecting to a properly configured machine - like your school or work. If everything is set up right, you can pull up a GUI or shell locally or over the Internet without much effort.
Shell and SSH
The shell is the most basic connection you can make to a Linux machine. It's a text environment where you type commands and the computer responds in that familiar 16-color interface - think DOS, but actually useful. Many new users are scared of shells, but don't worry, it's not nearly as bad as it seems. You might not believe it now, but once you get the hang of it, working from the shell is usually more efficient and easier than working from the GUI.
If you're connecting remotely, you're going to need an SSH client, like PuTTY or the standard ssh Linux command-line tool, and the host name or IP address of the computer you want to connect to; otherwise you can just use the local keyboard. PuTTY is nice because there's generally no installation - very simple - yet it still has a lot of the neat SSH features, like port tunneling, which I won't discuss here. If you're using a remote connection, type the host name or IP into the SSH client and tell it to connect.
Whether connecting remotely or locally, you will need to enter a username and password. Once that's done, you will be presented with something like this:
This station is for interactive use
[user@computer ~]$
This means that the system is ready to start taking commands. The '~' is where the current working directory is displayed. You will usually start out in your home directory, which is often denoted by '~'. The '$' means that you are not the root user. It turns to a '#' when you are. I will discuss root user, directory structure, and some basic shell commands later in this article.
It's also important to note that you can start a shell (sometimes called a terminal) from a graphical session. It will look and act the same as if you logged directly into a shell, but it will be in a window instead of a full screen - much like Windows' "command prompt", except you actually want to work with it.
X Server (locally)
Sit down, type in your username and password, done. Just like accessing the shell locally, except prettier. You may have an opportunity to choose your desktop environment; this will have no effect on how you log in, but as you would guess, the GUI will be completely different depending on which environment you choose.
X Server (remotely)
There are ways to connect to the X server remotely, allowing the user to have a GUI session at a remote computer. There is generally very little, if any, reconfiguration of the server required. The easiest way to do this is forward your X session over an SSH connection. You'll need an X-server running on your local machine to do this. There are some available for Windows - the one I use is provided by my school, and it's pretty expensive, so I won't bother mentioning it.
There is an option in PuTTY to forward an X session. That's the only thing that needs to be set on the client side.
Adding the command-line option -X to the ssh command will do the same thing.
Once you're connected to the server and have the X session being forwarded, simply begin executing programs. The X display you're logged into will receive any X windows you open.
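Putting the pieces together, a forwarded X session from the command line looks something like this - the user and host names are placeholders:

```shell
# Connect with X forwarding enabled ("user" and "host.example.com"
# are placeholders - substitute your own)
ssh -X user@host.example.com

# Once logged in, launching a graphical program from that shell
# (xclock is a simple one that's usually available) makes its
# window appear on your local display
xclock &
```

The "&" runs the program in the background so you keep your shell prompt while the window is open.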
VNC

VNC is a network protocol for remote GUI sessions. There are lots of different VNC server and viewer applications available, some with additional "non-protocol" features like file transfer or computer-to-computer instant messaging. Each VNC session actually connects to an already running X display session, so it's possible to have as many VNC sessions as you have X sessions. I use VNC because it's pretty easy to configure and I can run the same viewer to connect to Windows or Linux machines. VNC also allows all your applications to continue running after you disconnect without any additional commands - when you reconnect, everything will be just as if you were working at a desktop machine, got up for a while, and then returned.
I'm not going to get into how to configure VNC servers here; that's for a later tutorial. Assuming VNC is set up correctly on the server, all you need is a viewer client (such as UltraVNC), the host name or IP of the machine you want to connect to, the display number you want to connect to, and the password for the VNC connection. Punch all the information into the client and it should work just fine.
Understanding the Linux Filesystem
If you are not used to *nix operating systems, the first difference you will probably notice is the file system. The base of the file system is "/" (pronounced "root"), not "My Computer" or "C:\". Because of this, there is no notion of which physical drive you are working from; you will see why this is an advantage in a later tutorial. You will also notice that there are objects in directories that are not files or subdirectories: symbolic links, pointers to physical and logical devices, and all kinds of other things. We'll get to those later. For now, there are several important directories in "/" that you might want to know about. I won't cover them all, and those I do cover won't be explained in much depth, but it will be more than enough to get you started. For more detailed information and a history of how we got to this standard, see Wikipedia:Filesystem Hierarchy Standard.
/home/

Each user has a home directory. By default, a user's home directory is /home/username. The home directory of the user you are logged in as is often referred to simply as '~'. Lots of things are stored in a user's home directory including personal program preferences, application logs, shell logs, and whatever the user decides to put there on their own. Usually, a user's home directory is inaccessible to all other users (except root, but we'll get to that).
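You can confirm all of this from a shell - nothing here is system-specific:

```shell
# "~" and the HOME environment variable both name your home directory
echo ~
echo "$HOME"

# Many of the preferences and logs mentioned above live in hidden
# "dot files"; ls -a shows them (plain ls does not - more on ls later)
ls -a ~
```

Try this on any machine you can log into - the two echo commands always print the same path.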
/root/

This is the root user's home directory. We'll talk about why root is special later. You will have no need to go in this directory at this point.
/etc/

This is where system-wide configuration is stored. I'll talk more about specific files in here in a later tutorial.
/usr/

User-sharable read-only data, such as some program files and source code. In here you will see many of the same directory names as you would at the root of the filesystem.
/dev/

Contains pointers to all devices in the system, both logical and physical - everything from the mouse, to the sound card, to video capture cards, to hard drives and their partitions. More on why this is great, and fun things to do with devices, in a later tutorial.
/bin/

Almost all the executable files are in here. There are also some in /usr/bin/ and a few other places. Anything in this directory can be run without citing a path to the executable, because /bin/ is on every user's PATH. For example, the following two commands do pretty much the same thing - copy "hi.txt" to "bye.txt":
[user@computer ~]$ cp hi.txt bye.txt
[user@computer ~]$ /bin/cp hi.txt bye.txt
/sbin/

Binaries for system administrators - all the things regular users shouldn't be allowed to do. Like /bin/, there are more system administrator binaries in /usr/sbin/ and some other places. On some distributions, you must use the complete path for these executables, even if you're root - we'll talk about how to get around that in a moment:
[user@computer ~]# /sbin/mdadm
[user@computer ~]# mdadm
Notice the "#" instead of the "$" denoting that we are executing commands as root - I'll get into this more later.
/mnt/ and /media/
This is where removable media is generally mounted. Because Linux doesn't tie directories to particular drives, you can technically mount a device's partition to any directory on the system; however, if the system mounts something automatically (say, a USB drive or CD-ROM), chances are it ends up in /mnt/ or /media/. This is also a good place to mount things if you want to clearly label what device you're working from, for instance /mnt/cdrom or /mnt/usbdrive. I'll cover mounting and un-mounting in a later tutorial.
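As a quick preview, this is roughly what mounting looks like. The device name /dev/sdb1 is just an example - yours will likely differ, and the mounting commands themselves require root, so they appear here as comments:

```shell
# See what is mounted right now, and where
mount

# Mounting a USB drive's first partition by hand might look like
# this (run as root; /dev/sdb1 is an example device name):
#   mkdir -p /mnt/usbdrive
#   mount /dev/sdb1 /mnt/usbdrive
# ...and when you're finished with it:
#   umount /mnt/usbdrive
```

Even the root filesystem itself shows up in the mount listing, which reinforces the point that "/" is just a mount point like any other.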
Understanding Users

Every process running in Linux is run by a user. As you may have guessed, the human user does not initiate all processes; there are other types of users that also run processes. Every user has a UID (a number that uniquely identifies them), a shell that they can log into, and possibly a home directory. This information, and more, is stored as part of each user's account.
Human users are the accounts that actually belong to the people who will use the machine. Their home directories are generally /home/username, their UIDs are generally 500 or above, and their shell can be anything - though human users generally use /bin/bash.
System users are accounts for specific purposes or processes. For example, the Apache web server generally runs as the user apache. The idea is to give these processes access to certain directories and files, but not the whole system. In the case of Apache, permissions are given to the root directory of the website and related files - usually /var/www/. System users' UIDs are usually less than 500, and they generally do not have a home directory. Their shell is generally /sbin/nologin, which means that, although processes can run as the user and the user can own files, no one can log into a graphical or shell session with it.
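All of this account information lives in the file /etc/passwd, one line per user, with colon-separated fields: name, password placeholder, UID, GID, comment, home directory, and shell. You can inspect it directly:

```shell
# The root user's entry - note the UID in the third field
grep '^root:' /etc/passwd

# System accounts with a "nologin" shell (the exact path varies by
# distribution, and a minimal system may list none at all)
grep 'nologin' /etc/passwd || echo "(no nologin accounts found)"
```

Every user on the system, human or otherwise, has exactly one line in this file.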
The root user or "super user" is basically the God of the computer. There is nothing root can't do. Root can see all files, change properties of all files, edit all files, create users, manage hardware configurations, manage and install global software, shutdown and restart the system, start/restart/end services, and pretty much anything else you can think of. If you're coming from a Windows background, you can think of root as Administrator on steroids (Administrator can't see other people's files unless he is granted permission to or takes ownership first). Because root is so powerful, it is advised that a strong password be used to protect the account, and that the account is never logged into unless absolutely required. You should never ever log in as root to a graphical interface. Instead, use the su or sudo commands from the terminal to run things as root:
[user@computer ~]$ su
Password:
[root@computer ~]#
I'll talk about this more later. root's UID is 0, its home directory is /root/, and it generally uses the standard shell (/bin/bash).
Important Commands and Concepts
The one thing common to all Linux distributions is the set of basic shell commands. If you're coming from something like Windows, where the shell doesn't do much, this will seem a bit over-complicated, but once you get used to navigating the shell, you'll probably find that it's much more efficient.
Navigating the Filesystem

It's obviously important to know how to move around the filesystem, or you're not going to get very far. I already discussed how the filesystem is designed; now it's time to apply that knowledge.
The working directory is the directory all relative paths will be calculated from. You can think of it as "the directory I'm working in". For example, if your working directory is /var/www/html/ and you run:
[user@computer html]$ vim index.htm
The text editor, vim, will open the file /var/www/html/index.htm for editing. (More on text editors later.) Notice the appearance of the shell - see where it says "html"? That's the last component of the present working directory. If you're in your home directory, it will usually display "~".
To change the working directory use the "cd" (Change Directory) command:
[user@computer ~]$ cd /var/www/html
[user@computer html]$
To display the complete working directory path use the "pwd" (Present Working Directory) command:
[user@computer html]$ pwd
/var/www/html/
[user@computer html]$
Relative Directories and Files
It's often much easier to use relative paths to files or directories. In a relative path, ".." refers to the parent of the present working directory, "." refers to the present working directory itself, and "~" refers to your home directory. If a path does not begin with "/", the shell resolves it relative to the present working directory; if it does begin with "/", the shell treats it as an absolute path. You can use this to your advantage like this:
[user@computer html]$ pwd
/var/www/html/
[user@computer html]$ cd ..
[user@computer www]$ pwd
/var/www/
[user@computer www]$ cd ~/docs
[user@computer docs]$ pwd
/home/user/docs/
[user@computer docs]$ cd ../..
[user@computer home]$ pwd
/home/
[user@computer home]$ cd user/docs/..
[user@computer ~]$ pwd
/home/user/
[user@computer ~]$ cd /usr/local/bin/
[user@computer bin]$ pwd
/usr/local/bin
[user@computer bin]$ cd ~
[user@computer ~]$ pwd
/home/user
[user@computer ~]$ ./somecommand
Notice that to run somecommand, whose binary is in the present working directory, you have to tell the shell explicitly to look in the present working directory. Running the command without "./" will not work and will result in a "command not found" error - more on this in a bit.
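As a quick sketch (all names and paths here are made up for illustration), you can see this behavior by creating a small script and running it with the "./" prefix:

```shell
# Create a throwaway directory with an executable script in it
mkdir -p /tmp/pwd-demo
cd /tmp/pwd-demo
printf '#!/bin/sh\necho it ran\n' > somecommand
chmod +x somecommand
# Typing "somecommand" alone would fail (it is not in the PATH),
# but the explicit "./" path works:
./somecommand
```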
Unlike Windows, there is no "hidden file" attribute to set. If you want to hide a file, simply prefix its name with a ".". For instance, the file "/home/user/.thisfile" will be hidden. The same is true of directories. This does not apply to sub-elements of a hidden directory, however: if your present working directory is a hidden directory, all files not prefixed with "." will still be visible. More on looking at hidden files in the next section.
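A minimal sketch of this (the file name is made up):

```shell
cd /tmp
touch .thisfile                        # the leading dot makes the file hidden
ls | grep -Fx '.thisfile' || echo "plain ls does not show it"
ls -a | grep -Fx '.thisfile'           # the -a option shows it: prints ".thisfile"
```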
Finding Files
Want to know what's in the directory you're working in? How about some other directories? Of course you do, because that's important. There are a few ways to do that.
dir works much like its Windows counterpart. Just run it, and you'll get a listing of everything in the present working directory.
ls is way cooler than dir. First off, it color-codes everything so you know what's going on, which is great. It can also take a significant number of options to get all kinds of useful information. ls in its most basic form works like this:
[user@computer somedir]$ ls
source destination someotherfile someotherdir
[user@computer somedir]$ cd ..
[user@computer ~]$ ls somedir
source destination someotherfile someotherdir
Notice that if you don't give ls any information about where you want to look, it simply gives information about the present working directory. If you do, it will give you information about whatever directory you ask about.
Now, to display the hidden files with the -a option:
[user@computer somedir]$ ls -a
. .. source destination someotherfile someotherdir .somehiddendir
[user@computer somedir]$ cd ..
[user@computer ~]$ ls -a somedir
. .. source destination someotherfile someotherdir .somehiddendir
Now, to display the permissions and extended information for the files with the -l option: (we'll cover file permissions in a later tutorial)
[user@computer somedir]$ ls -l
drwxr-xr-x 2 user user 48010053 Nov 29 20:38 source
drwxrwxrwx 2 root root 42341234 Feb 27 23:22 destination
-rw-r--r-- 1 root root     4096 Oct 12 16:04 someotherfile
drwxr-xr-x 2 user user    64533 Mar 27 21:41 someotherdir
Both at the same time:
[user@computer somedir]$ ls -al
drwxr-xr-x 2 user user     4096 Nov 29 20:38 .
drwxr-xr-x 4 user user     4096 Nov 29 20:38 ..
drwxr-xr-x 2 user user 48010053 Nov 29 20:38 source
drwxrwxrwx 2 root root 42341234 Feb 27 23:22 destination
-rw-r--r-- 1 root root     4096 Oct 12 16:04 someotherfile
drwxr-xr-x 2 user user    64533 Mar 27 21:41 someotherdir
drwxrwxrwx 2 user user     4096 Nov 29 18:11 .somehiddendir
[user@computer somedir]$
Also, be sure to play with the -s option to show file sizes, and -sh to show them in a "human-readable" format.
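For instance (the file name is made up), comparing the two on a one-megabyte file:

```shell
# Write a 1 MiB file so there is something to measure
dd if=/dev/zero of=/tmp/bigfile bs=1024 count=1024 2>/dev/null
ls -s /tmp/bigfile     # size in blocks
ls -sh /tmp/bigfile    # human-readable size, with a K/M/G suffix
```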
Manipulating Files
So now you know where all the files are; it's time to learn how to change their names and locations.
To copy a file, use the cp command, like this:
[user@computer somedir]$ ls
source
[user@computer somedir]$ cp source destination
[user@computer somedir]$ ls
source destination
[user@computer somedir]$
The ls commands are just there to demonstrate the expected output; you don't need to run them - unless you really want to see the result of your work.
To move a file use the mv command, like this:
[user@computer ~]$ ls somedir
source
[user@computer ~]$ mv somedir/source someotherdir/destination
[user@computer ~]$ ls someotherdir
destination
[user@computer ~]$ ls somedir
[user@computer ~]$
A rename operation is basically a move operation, so use the mv command, like this:
[user@computer somedir]$ ls
source
[user@computer somedir]$ mv source destination
[user@computer somedir]$ ls
destination
[user@computer somedir]$
To delete a file use the rm command, like this:
[user@computer somedir]$ ls
source
[user@computer somedir]$ rm source
[user@computer somedir]$ ls
[user@computer somedir]$
rm also takes the -R option, which allows you to remove directories and files recursively (e.g. a directory and all the files in it). That looks like this:
[user@computer ~]$ ls somedir
source destination someotherfile someotherdir
[user@computer ~]$ rm -R somedir
Now, on many distributions (where rm is aliased to rm -i for safety), you'll notice that this prompts you before deleting each file... That's pretty annoying and slow. Try adding the -f option:
[user@computer ~]$ ls somedir
source destination someotherfile someotherdir
[user@computer ~]$ rm -Rf somedir
[user@computer ~]$
Ahh, much better - no prompts. This option also silently skips errors - for instance, it won't tell you if the file you're trying to delete doesn't exist. This is useful in scripting, where a single error might crash your script. More on that in a much more advanced article.
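For instance (the path is made up), rm -f reports success even when there is nothing to delete:

```shell
# rm -f exits with status 0 even though the file does not exist,
# so a script's cleanup step never trips over a missing file
rm -f /tmp/no-such-file && echo "cleanup ok"
```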
WARNING: Linux will NOT protect you from destroying your entire file system with rm -rf. Be very careful with this command, especially when running it as root. There is no "trash can" where things go, and it is ridiculously difficult to recover deleted files from Linux filesystems. Do not, under any circumstances, listen to anyone who tells you to run 'rm -rf /'. They are making fun of you - this command will torch every file on your system on all mounted drives. If you do this, let me know so I can laugh for hours.
Text Editors
Most Linux configuration is done through text files, so it's important to be able to manipulate them. This is also how you'll read log files. There are two basic types of editors: ones for the text environment and ones for the graphical environment. There are literally thousands of different text editors available. Most people pick a text editor and never even consider using anything else.
vim is my favorite text editor. It's very straightforward, with a lot of neat features - like automatic syntax highlighting. It's command based, so it takes a little while to get used to, but once you get the hang of it you'll find that you're spending time actually doing what you want instead of looking through menus for the right button - much more productive.
To run vim, use the vim command followed by the name of the file you want to edit or create:
[user@computer ~]$ vim somefile.conf
Vim usually automatically figures out what kind of file you're editing and will highlight the syntax appropriately. It will even highlight some very basic errors. This is a great feature when you're dealing with configuration files or trying to match up the number of "("'s to ")"'s.
Now for the commands:
- You'll notice that when Vim first starts, you won't be able to edit anything. Press the "Insert" key (or "i"). You'll see that "--INSERT--" appears at the bottom of the screen, indicating that insert mode is active. Now you can type normally. Press "Insert" again to switch to "--REPLACE--" mode, which causes text to be overwritten as you type. "Insert" toggles between these two modes. To exit edit mode, press "Esc".
- When you're not in edit mode you will be able to execute commands. Commands usually start with the ":" character, so don't forget to put that in. Some important commands:
- Quit - ":q"
- Quit without saving - ":q!"
- Save - ":w" (or ":write")
- Save and quit - ":wq"
- Visual select mode - "v" - allows you to select text for cut/copy/delete
- Cut - "x" - do visual selection first
- Copy (yank) - "y" - do visual selection first
- Paste - "p"
- Delete currently selected - "del"
- Undo - "u" - undoes the last edit
That should get you started. Vim also supports things like search, text replacement, undo "trees" and "branches", sorting, automatic formatting, editing multiple files concurrently, displaying multiple files concurrently, crash recovery auto-save, and advanced scripting - it has a lot of functionality. See the Vim documentation for further information.
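For instance, the search and replacement commands look like this (a brief sketch; see Vim's built-in :help for the full story):

```
/pattern          " search forward for "pattern"; press n to jump to the next match
?pattern          " search backward
:%s/old/new/g     " replace every "old" with "new" in the whole file
:s/old/new/       " replace the first "old" on the current line only
```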
There are tons of graphical editors, and different ones are packaged with different distributions. They're generally pretty straightforward. The one I most commonly use is gedit, which is Gnome's simple text editor. I use vim a lot more than gedit.
The "su" command
The su command allows you to execute commands as other users. By default, just entering su will allow you to run commands as root:
[user@computer ~]$ su
Password:
[root@computer ~]# exit
[user@computer ~]$
You know that you're acting as root when the "$" changes to a "#" and the user displayed is "root". To exit root mode back to your regular user, simply type exit. Remember, root is very powerful, so make sure you never ever log in as root unless you know what you're doing. Instead, always use the su command from your regular account, execute what you need, then exit back to normal mode.
su also allows you to run commands as any user on the system:
[user1@computer ~]$ su user2
Password:
[user2@computer ~]$ exit
[user1@computer ~]$
To run commands as another user, you will need that user's password, or you can just log in as root first - like this:
[user1@computer ~]$ su
Password:
[root@computer ~]# su user2
[user2@computer ~]$ exit
[root@computer ~]# exit
[user1@computer ~]$
Now you have to type exit once to get back to root mode, and again to get back to the original user.
su also has some arguments that can be passed. The most interesting is the "-l" argument, for "login". This actually sets up the environment, including the PATH variable, for the user you're running commands as. For instance, root often has "/sbin/" in its PATH, where most users won't and shouldn't. This is very useful for system administrators who want to test their users' configurations. More about PATH and how to look at it below.
The "sudo" command
sudo allows you to run commands as root without logging in as root and without knowing the root password. This is kind of handy. Many distributions automatically add the first user to the "sudoers" list, so you can often use it right away. Some distributions don't give you a usable root password at all, making sudo the only way to run commands as root.
To use sudo, simply put the command you want to run as root after sudo. You'll be asked to enter your password (not the root password) and then the command will run. Your password is usually stored for a period of time, so you won't have to reenter it for each command.
[user@computer ~]$ sudo vim somefile
Password:
This runs the vim text editor, as root, on "somefile". You can run any command you want using sudo.
About Commands
When you type in a command, Linux looks in the directories specified in the PATH variable to find that command. Generally, you don't have to worry about the PATH, but it's good to know what's going on. To see your PATH:
[user@computer ~]$ echo $PATH
/usr/local/bin:/bin:/usr/bin
[user@computer ~]$
You will probably have more entries in your PATH than this. As long as the binary for the command you want to run is in one of the directories listed in PATH, you can call it from any working directory. If a command lives elsewhere, you have to call it with an explicit path. For example:
[user@computer someDir]$ ls
aCommand
[user@computer someDir]$ aCommand
aCommand: command not found
[user@computer someDir]$ ./aCommand
This is what aCommand does.
aCommand has completed.
[user@computer someDir]$
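To see where in your PATH the shell actually found a command, the which command will search the PATH for you (the exact path printed varies by system):

```shell
# which prints the full path of the binary the shell will run
which ls
```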
I talked about this earlier when I discussed relative directories. It's possible for different users to have different PATH variables, and some elements of the PATH can be generated dynamically at logon time. I'll cover changing the PATH for a single session, changing it permanently for a single user, and setting it dynamically at logon in a more advanced tutorial.
Getting Help
Have no idea what a command does or how to use it? There are commands for that.
Man pages are a great source of information. Generally, whenever you install an application some man pages will come with it explaining how to use the program. To get to them you can generally do something as simple as this:
[user@computer someDir]$ man theCommandYouNeedHelpWithHere
Man pages are divided into sections, numbered 1 through 9:
- 1: General Commands
- 2: System Calls
- 3: C Library Functions
- 4: Special Files
- 5: File Formats
- 6: Games and Screensavers
- 7: Miscellanea
- 8: System Administration Commands
- 9: Kernel Routines
To select what section you want to view use the "-s" option, like so:
[user@computer someDir]$ man -s 3 sprintf
This generally isn't required unless you're having trouble getting the correct man page to display.
Usually you have to use the "q" key to exit man.
info is much like man, except the interface is different. Many applications ship only info or only man pages, so if you can't find help through one, try the other.
Many commands have a help parameter that shows some simple usage information. Usually it's "-h", "-help", or "--help":
[user@computer someDir]$ somecommand --help
somecommand [arg1] [arg2] v 0.0
This is a completely made up command that does nothing at all.
[arg1]   A file
[arg2]   A different file
--help   Display this help information
--fake   This doesn't do anything
[user@computer someDir]$
They're generally much more useful than this example, which I made up in less than 60 seconds.
Conclusion
So, that's how it works. Good luck, have fun.
See Also
- Migrating to Linux - A how-to guide for making a dual-boot Windows/Linux system with the ability to run Windows apps in Linux using VMWare Server (using only freely available software except Windows).
- Linux Tutorial Part 2 (Intermediate Topics)
- Linux Tutorial Part 3 (Advanced Topics)
- Linux Tutorial Part 4 (Development Topics)