
The Linux command line from a Return on Investment (ROI) perspective AND as a tool to work with data and files

Many IT professionals meet the following realities in their exposure to Linux:

  • Chances are you will work with files on an existing Linux system. In Linux, all configuration can be traced back to plain text files. Yes, you work with files on Windows too, but on Linux you are more likely to do so from the command line. So you need to know how to work with those text files.
  • Moreover, in your daily work with text files – essentially data – you will find that, for business and IT reasons, data files are shared in text-based formats such as CSV, JSON, YAML, and Syslog (RFC 5424). Therefore, you need to know how to import, export, manipulate, and interpret files in these formats.
  • Chances are you will not install any system software. So why even bother learning how to do that? This might sound odd to many IT engineers, but there is a range of IT professions where there is no need to change the configuration of an existing system. Instead, these professionals need to use the computer system to work with data, not to manage software applications.

You can call the three points above an argument with three premises. The conclusion that follows is the purpose of this website, Nail Linux Exam. There are many (too many) Linux certifications out there. How can you know which one to pick? Which one is most respected in your industry? Which one is a fad? And which one is most cost-effective for your particular situation?

Nail Linux Exam aims to help you with these questions. This is not just another website promising to help you pass Linux exam X. This site takes a more holistic approach to Linux skills and exams, tailored to the needs of professionals working on data that happens to reside on, or pass through, a Linux system. Whether the system is configured correctly is not our concern; we take that for granted!

If you agree with the statements above, then NailLinuxExam.com is likely a useful tool on your Linux command line learning path.

Reap the highest possible benefit as fast as possible

So what you want is to a) invest as little time as possible, b) reap the highest possible benefit (useful Linux command line knowledge), and c) build a solid foundation for passing the basics of any Linux certification exam.

The common denominator here is the file system and the commands to work with files and the file system, without configuring the system itself.

Linux basics on the command line

This site and its quizzes are for you who want to learn the basics of working with Linux: files, importing and exporting data, and relational databases running on Linux. If we glance back at the three bullet points above, you can see that we will completely avoid installing and maintaining system software (web servers, proxy servers, email servers, or what have you). Because of the Linux paradigm that everything is a file, we will learn how to work with files and their contents in a Linux environment.

This website does not compete with, endorse, or support any of the well-known Linux certifications on the market. They all touch upon the Linux command line and how to work on it, and that is also the foundation for this site: learn how to work on the command line, with a few role-specific entrances for the business professional, the data scientist, and the Windows engineer.

Three reasons why you should learn Bash as a Windows engineer in 2021

There are plenty of reasons to look into learning Linux, as a matter of fact. Let us look at three strong ones.

1. Linux can now be installed inside Windows, as WSL (Windows Subsystem for Linux), and that comes with Bash. Bash is a strong tool for automating system services and the import and export of data.

2. Do you want to work in cloud engineering, with a cloud provider such as AWS or Azure? The responsibilities of IT engineers increasingly cross operating system borders, and Cloud Engineer roles are a good example of this trend. When the focus is on running virtual machines in the cloud, it shifts from knowing one operating system to knowing how to work on the command line of several. Windows Batch scripting will not get you far, as it is rather limited, and Windows PowerShell has been succeeded by the cross-platform PowerShell Core, so its future is not as settled as that of Bash, which is the default shell on the Linux systems that run the majority of web servers.

3. By learning how to code in Bash, you become familiar with orchestrating Linux command line tools in a procedural way. Bash also supports functions, and by combining them with pipes to string command output together, you have a strong base for automating system administration tasks.
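As a small taste of what that looks like, here is a minimal sketch (the function name and log file path are placeholders, not from any real system): a Bash function pipes grep output into wc to count error lines in a log file.

# Count the lines in a file that mention "error", case-insensitively
count_errors() {
    grep -i "error" "$1" | wc -l
}

count_errors /var/log/syslog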

Stay tuned for more insights and tips! Also check out Learn Linux as a Windows engineer.

5 Linux network commands that are similar or the same as in the Windows shell – Part 2

In the first part of this blog series, we did a round-up of five common Linux network commands and their equivalents in Windows. In this part we will continue with the same five Linux networking commands and automate their usage with Bash scripts.

The commands we will cover are netstat, nslookup, ping, traceroute and curl. As you may know, these commands take a range of switches that change their behavior. Using Bash, we will automate running the commands and handling the output they generate. We will also use the read built-in to create simple user interfaces that request input from the keyboard. In one of our examples we will use a handy while loop in combination with read to process all lines in a file – a code snippet you will definitely find useful in the future.

Let’s get to it!

netstat for network connections

netstat can show you network connections, the routing table, and network interface statistics. But let’s say we want to narrow the search down to network connections in listening mode, and only those that were opened by users on the local Linux box. We can do that with the -l switch in combination with grep, filtering for connections whose socket path contains “user” (sockets in the run directory on Debian-based Linux). This gives us a neat table showing all listening network connections initiated by users.

Check out the code here:

netstat -l | grep user

Above we pipe the output from netstat to grep, matching only the entries that contain “user”.

Checking listening network connections for a specific user

Let’s move on to the next netstat Bash example. We are going to build on the last command snippet. Now we want to narrow down the search even more, to only show listening network connections that were opened by a specific user ID. Recall that all users in a Linux system have numeric user IDs associated with them. If you want to learn more about Linux user IDs, please read “What is a Linux UID?”.

Now let us look at the Bash script.

read -p "Which user ID? " usr
netstat -l | grep user | grep $usr

As you can see, the second line is almost identical to the one in the last section. The difference is that we have added another grep, matching the contents of the variable $usr. The first line is where we define the $usr variable, by reading from the keyboard with the Bash built-in read.

The result is a simple but pretty useful Bash script. We ask the user to enter a UID, then we list all listening network connections, grep for user-initiated connections, and finally grep for the specific UID.

But netstat -l is not fool-proof!

You should know that the solution above is not fool-proof: we are matching on fixed strings, so we cannot guarantee there will be no false positives. However, although the output may contain some extra matching lines, we can be sure that all listening network connections for the specific UID will be displayed.
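One way to tighten the match is to grep for the full runtime socket path instead of the bare word “user”. This is a sketch that assumes the per-user sockets live under /run/user/<UID>/, as they typically do on Debian-based systems:

read -p "Which user ID? " usr
# Match the full runtime socket path for the UID, not just the word "user"
netstat -l | grep "/run/user/${usr}/"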

netstat for TCP and UDP connections

Now it’s time to automate netstat even more. In this scenario we want to check the listening network sockets for both TCP and UDP, and map each socket to the PID and program name that opened it. The switches stand for TCP (-t), UDP (-u), listening (-l), PID/program name (-p), and numeric addresses (-n).

netstat -tulpn
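To automate this in the same spirit as the earlier examples, we can read a program name from the keyboard and filter the output for it. A minimal sketch (the variable name is arbitrary):

read -p "Program name? " prog
# Filter the listening TCP/UDP sockets for the given program name
netstat -tulpn | grep "$prog"

Note that you typically need root privileges for the -p switch to show the PID and program name for sockets owned by other users.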

nslookup to look up domain names

Our first nslookup example will ask for a domain name and look up the associated A record for it.

read -p "Enter domain: " domain
nslookup $domain

As you can see, we are reading the domain name from the keyboard and simply passing that variable value to nslookup.

nslookup for several domains on the same line

Let’s say you want to get A records for several domains in one go. Since the default internal field separator (IFS) includes the space character, you can separate several values with spaces, and read will assign them to the variables in turn. read consumes input until it encounters a newline character, then splits that line into words. So in effect the command reads words, not lines. Like this:

read -p "Enter domains: " domain domain2

nslookup $domain
nslookup $domain2

read assigns the values to each variable in turn, and then we simply run nslookup twice, once for each variable. Simple enough.
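This is also a good place to deliver on the while loop promised in the introduction. The sketch below (assuming a plain text file called domains.txt with one domain per line) reads the file line by line and runs nslookup for each:

# Read domains.txt line by line; -r keeps backslashes literal
while read -r domain; do
    nslookup "$domain"
done < domains.txt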

nslookup can do more

nslookup can look up all sorts of domain records, such as the authoritative name servers for a domain (NS records) and its email servers (MX records). Check out the following example:

read -p "Enter domain: " domain

nslookup -type=MX $domain
nslookup -type=NS $domain

Here we read one domain from the keyboard and assign it to a variable – nothing new here. But check out the two nslookup commands. We use the -type switch to specify which type of domain record we are looking for. MX stands for Mail Exchange, used for routing email, and NS stands for Name Server, the authoritative domain server(s) for the domain.

curl to get web page contents

curl is, in its generic form, a tool for transferring data to and from servers. It is often used to transfer HTTP data. Here we will see how to get the web page headers for a domain.

curl -I https://www.google.com

The -I switch tells curl to fetch the headers only. This is perhaps the simplest of all curl examples. Be aware that curl supports a wide range of protocols: DICT, FILE, FTP, FTPS, GOPHER, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMB, SMBS, SMTP, SMTPS, TELNET and TFTP.

curl to fetch a user-specified web page

So let us continue by using curl to fetch an entire web page. This is the default behavior when you run curl without switches.

read -p "Domain name? " domain

curl $domain

Like before, we ask the user to submit a domain name (it should be prefixed with http:// or https://). Then we simply invoke curl with the variable we read, and the entire web page is printed to stdout.

curl to fetch and save web pages

Finally, let us modify the previous example and save the output to a file. Like this:

read -p "Domain name? " domain

curl $domain -o output.txt

That’s it! To understand the usefulness of curl, I highly recommend that you run man curl and read about all the available options.

You can find the GitHub repository with the simple examples here.

Author: Paul-Christian Markovski, for NailLinuxExam.com.

Let’s set up a Git repository

The sooner you get used to storing your Bash scripts in Git, the better. Version control of your code simplifies future improvements and lets you keep track of all changes in an organized fashion. It also allows your peers and colleagues to follow the logic you used when developing your scripts.

Let’s set up your first Git repo:

  • Enter the directory where the repository will be located.
  • Run git init
  • Copy your source code file(s) (or write a first file) into this directory.
  • Run git add *
  • Run git commit -m "Enter information about the script and any changes here."
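If you want to script these steps, here is a minimal Bash sketch (the directory name and commit message are just examples):

cd ~/my-scripts          # hypothetical directory holding your scripts
git init
git add *
git commit -m "Initial commit of Bash scripts"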

That’s it! You just created your first Git repository. Let’s take a closer look at the directory .git that was created when you ran git init. We will look at some very important files and directories, to quickly understand the basics of Git.

description – This file contains a text summary of the project. Enter some text at will.

config – This file contains settings for the project. Here you will find Boolean variables and numeric values (see the sample below).

hooks – This directory contains shell scripts that run before or after Git events, for instance commit or push. When you first create a Git repository you will find sample scripts here, for commit, push, rebase, etc. (a minimal example follows below).

objects – This is where Git stores its internal data: the content of every version of the files in the repository. This is the “useful” data, your source code.
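For reference, the config file in a freshly initialized (non-bare) repository typically looks something like this on Linux:

[core]
	repositoryformatversion = 0
	filemode = true
	bare = false
	logallrefupdates = true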

There are several more files and directories, but the above cover the basics fairly well. Over time you will work extensively with Git hooks to customize what happens during Git events.
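As a first taste, here is a minimal pre-commit hook sketch. Save it as .git/hooks/pre-commit and make it executable with chmod +x; it runs a Bash syntax check (bash -n) on every staged .sh file and aborts the commit if a script fails to parse. The file selection is just an example and assumes file names without spaces.

#!/bin/bash
# Abort the commit if any staged .sh file fails a Bash syntax check
for f in $(git diff --cached --name-only -- '*.sh'); do
    if ! bash -n "$f"; then
        echo "Syntax error in $f - commit aborted"
        exit 1
    fi
done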

Well done. You have just started using Git. The first and most important step is done. Now you can start looking into the Git commands by typing git help. Hint: You want to learn how to use Git hooks to automate steps in relation to Git events.