
Three reasons why you should learn Bash as a Windows engineer in 2021

There are plenty of reasons to look into learning Linux. Let us look at three strong ones.

1. Linux can now be installed inside Windows via WSL (the Windows Subsystem for Linux), and it comes with Bash. Bash is a strong choice for automating system services and data import/export.

2. Do you want to work in cloud engineering, with a provider such as AWS or Azure? The responsibilities of IT engineers are increasingly crossing operating system borders, and Cloud Engineer roles are a good example of this trend. When the focus shifts to running virtual machines in the cloud, it also shifts from knowing one operating system to knowing the command line of several. Batch scripting in Windows will not get you far, as it is rather limited. PowerShell has been succeeded by PowerShell Core, and its future is not as certain as that of Bash, which runs on the majority of the web servers you will encounter.

3. By learning how to code in Bash, you become familiar with orchestrating Linux command-line tools in a procedural way. Bash also supports functions, and combined with pipes to string command output together, that gives you a strong base for automating system administration tasks.
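As a minimal sketch (not from the original post) of what that looks like, here is a hypothetical helper function, count_states, combined with pipes; the sample input stands in for real command output:

```shell
# count_states: read lines on stdin and print each unique line
# with its count, most frequent first.
count_states() {
    sort | uniq -c | sort -rn
}

# String command output together with pipes:
printf 'LISTEN\nESTABLISHED\nLISTEN\n' | count_states
```

Once a task is wrapped in a function like this, any command's output can be piped into it, which is the essence of Bash automation.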

Stay tuned for more insights and tips! Also check out Learn Linux as a Windows engineer.

5 Linux network commands that are similar or the same as in the Windows shell – Part 2

In the first part of this blog series, we did a round-up of five common Linux network commands and their Windows equivalents. In this part we continue with those five Linux commands and automate their usage with Bash scripts.

The commands we will cover are netstat, nslookup, ping, traceroute and curl. As you may know, these commands have a range of switches that can be activated when running them. Using Bash, we will automate running the commands and handling the output they generate. We will also use the read Bash built-in to create simple user interfaces that request input from the keyboard. In one of our examples we will use a handy while loop in combination with read to process all lines in a file. That is a code snippet you will definitely have use for in the future.

Let’s get to it!

netstat for network connections

netstat can show you network connections, the routing table, and network interface statistics. But let's say we want to narrow our search to network connections in listening mode, and only those that were opened by users on the local Linux box. We can do that with the -l switch in combination with grep, filtering for connections whose path contains "user" (under the run directory on Debian-based Linux). This way we get a neat table showing all listening network connections initiated by users.

Check out the code here:

netstat -l | grep user

Above we pipe the output from netstat to grep and only match for entries in the output that contain “user”.

Checking listening network connections for a specific user

Let’s move on to the next netstat Bash example. We are going to build on the last command snippet. Now we want to narrow down the search even more, to only show listening network connections that were opened by a specific user ID. Recall that all users in a Linux system have numeric user IDs associated with them. If you want to learn more about Linux user IDs, please read “What is a Linux UID?”.

Now let us look at the Bash script.

read -p "Which user ID? " usr
netstat -l | grep user | grep "$usr"

As you can see, the second line is almost identical to the one in the last section. The difference is that we have added another grep statement, matching the contents of the $usr variable. The first line is where we define $usr, by reading from the keyboard with the Bash built-in read.

The result is a simple but pretty useful Bash script. We ask the user to enter a UID, search for all listening network connections, grep for user-initiated connections, and finally filter for the specific UID.
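Since the script takes keyboard input, a hedged refinement is to validate that the input really is numeric before passing it to grep. check_uid_listeners is a hypothetical wrapper, not from the original script:

```shell
# Hypothetical wrapper around the two-line script above, adding
# input validation: only run the pipeline for a purely numeric UID.
check_uid_listeners() {
    local usr=$1
    if [[ "$usr" =~ ^[0-9]+$ ]]; then
        netstat -l | grep user | grep "$usr"
    else
        echo "Please enter a numeric user ID." >&2
        return 1
    fi
}

# Interactive use, as in the original script:
# read -p "Which user ID? " usr
# check_uid_listeners "$usr"
```

Rejecting bad input early avoids running netstat with a grep pattern that could match far more than a UID.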

But netstat -l is not fool-proof!

You need to know that the solution above is not fool-proof: we are using predetermined matching conditions and cannot guarantee there will be no false positives. However, although the output may contain some extra lines for matching (listening) network connections, we can be sure that all listening network connections for the specific UID will be displayed.

netstat for TCP and UDP connections

Now it’s time to look further and automate netstat even more. In this scenario we want to check the listening network sockets for both TCP and UDP, and map each socket to the PID and program name that opened it.

netstat -tulpn
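The -p column combines the PID and the program name, which makes it easy to post-process with Bash. A sketch, using sample lines in place of real netstat -tulpn output (the real command needs root to show every PID, and the sshd/dhclient lines here are made-up examples):

```shell
# Sample lines standing in for `sudo netstat -tulpn` output:
sample='tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 612/sshd
udp 0 0 0.0.0.0:68 0.0.0.0:* 455/dhclient'

# The last field holds PID/Program; split on "/" and print the name.
echo "$sample" | awk '{n = split($NF, a, "/"); if (n == 2) print a[2]}'
```

Piping the real netstat output through the same awk one-liner gives you a plain list of programs with open sockets.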

nslookup to look up domain names

Our first nslookup example asks for a domain name and looks up the associated A record.

read -p "Enter domain: " domain
nslookup $domain

As you can see, we are reading the domain name from the keyboard and simply passing that variable value to nslookup.

nslookup for several domains on the same line

Let’s say you want to get A records for several domains in one go. Since whitespace is part of the default internal field separator (IFS), read splits its input on spaces and assigns one word to each variable you name. The read command consumes input until it encounters a newline character, so it really reads words, not lines. Like this:

read -p "Enter domains: " domain domain2

nslookup $domain
nslookup $domain2

We assign a value to each variable in turn, and then run nslookup twice, once for each variable. Simple enough.
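The intro promised a while loop combined with read to process every line of a file, and domain lookups are a natural fit. A sketch: for_each_line is a hypothetical helper, and domains.txt a hypothetical file with one domain per line:

```shell
# Run a command once for each line in a file.
for_each_line() {
    local file=$1
    shift
    while IFS= read -r line; do
        "$@" "$line"
    done < "$file"
}

# Hypothetical usage: look up every domain listed in domains.txt
# for_each_line domains.txt nslookup
```

Setting IFS= and using read -r preserves each line exactly as it appears in the file, which is the standard way to loop over lines in Bash.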

nslookup can do more

nslookup can look up all sorts of domain records, such as the name servers responsible for a domain (NS records) and the mail servers (MX records). Check out the following example:

read -p "Enter domain: " domain

nslookup -type=MX $domain
nslookup -type=NS $domain

Here we read one domain from the keyboard and assign it to a variable, nothing new here. But check out the two nslookup commands. We use the -type switch to specify which type of domain record we are looking for. MX stands for Mail Exchange, used for routing email. NS stands for Name Server, the authoritative domain server(s) for the specific domain.

curl to get web page contents

curl is, in its generic form, a tool to transfer data to and from servers. It is often used to transfer HTTP data. Here we will see how to get the web page headers for a domain.

curl -I https://www.google.com

The -I switch tells curl to fetch the headers only. This is perhaps the simplest of all curl examples. Be aware that curl supports a wide range of protocols: DICT, FILE, FTP, FTPS, GOPHER, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMB, SMBS, SMTP, SMTPS, TELNET and TFTP.
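Header output like this is easy to post-process in Bash. A sketch that extracts the status code, using sample headers in place of a live curl -I request:

```shell
# Sample headers standing in for live `curl -I` output:
headers='HTTP/2 200
content-type: text/html'

# The status code is the second field of the first line.
status=$(echo "$headers" | awk 'NR == 1 {print $2}')
echo "$status"
```

In a real script you would capture the live output with something like headers=$(curl -sI "$url") and apply the same awk filter.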

curl to fetch a user-specified web page

So let us continue by using curl to fetch an entire web page. This is the default mode when you use curl without switches.

read -p "Domain name? " domain

curl "$domain"

Like before, we ask the user to submit a domain name (it should be prefixed with http:// or https://). Then we simply invoke curl with the read variable, and the entire web page is printed to stdout.

curl to fetch and save web pages

Finally, let us modify the previous example and save the output to a file. Like this:

read -p "Domain name? " domain

curl "$domain" -o output.txt

That’s it! To appreciate the usefulness of curl, I highly recommend running man curl and reading about all the available options.

You can find the GitHub repository with the simple examples here.

Author: Paul-Christian Markovski, for NailLinuxExam.com.

5 Linux network commands that are similar or the same as in the Windows shell – Part 1

Let us take a look at the most common network command-line utilities that exist both in Linux and in Windows. The names of the commands are almost all the same. That makes sense, as these tools are part of the daily toolset of any IT engineer, whether you work in Linux or Windows.

traceroute (tracert in Windows)

The handy tracert familiar from Windows environments has its Linux equivalent in traceroute. You should be aware of the extra probe protocols traceroute offers: besides the default UDP probes, you can use TCP SYN or ICMP ECHO, and you can select a custom port.
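As a sketch, here is a hypothetical helper that composes the traceroute invocation for each probe type (TCP and ICMP probes typically require root, hence the sudo prefix; example.com is a placeholder host):

```shell
# Compose a traceroute command line for a given probe type.
trace_cmd() {
    local mode=$1 host=$2
    case "$mode" in
        tcp)  echo "sudo traceroute -T -p 443 $host" ;;  # TCP SYN probes
        icmp) echo "sudo traceroute -I $host" ;;         # ICMP ECHO probes
        *)    echo "traceroute $host" ;;                 # default UDP probes
    esac
}

trace_cmd tcp example.com
```

TCP SYN probes to port 443 are handy when firewalls drop the default UDP probes, while -I matches what Windows tracert does by default.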

ping (ping in Windows)

Check the availability of a node with ping in both Linux and Windows. This might well be the simplest tool for checking whether a network endpoint is reachable.

netstat (netstat in Windows)

Use netstat to check all the network connections. Moreover, you can check both listening and established ports on the local node. netstat can be used with a range of options in both Linux and Windows; for instance, you can check the routing table, interface statistics, and multicast memberships. It can be argued that the Linux ss command for investigating sockets has almost replaced netstat, but for daily network tasks, netstat is still very useful in Linux distributions.

nslookup (nslookup in Windows)

Use nslookup to query Internet name servers in both Linux and Windows. Although dig was favoured over nslookup for some time, nslookup is still a very important tool in the toolchest. It is worth mentioning that dig has more options, which makes it great for wrapping in Bash scripts.

curl (curl in Windows)

curl, or Client URL, has been around since 1998 as the successor to HttpGet. It is a robust tool for transferring data over networks, with support for a number of protocols (HTTP, HTTPS, FTP, FTPS, SCP, SFTP, TFTP, DICT, TELNET, LDAP).

This completes part one of our round-up of five common Linux network commands and their Windows equivalents. In the next part of this blog post series, we will automate these commands in Linux with Bash scripts.

Author: Paul-Christian Markovski, for NailLinuxExam.com.

What’s the scoop on DevOps and database administration?

Data and databases are at the heart of any business effort to adopt DevOps, with development and operations working closer together and embracing automation. Data equals business value, and it has to be safeguarded against corruption as well as loss.

When DBMaestro surveyed 244 IT professionals, they learned plenty about how databases are handled in DevOps-oriented workplaces. The top three risks reported when deploying database changes were downtime, performance impact, and database/application crashes.

This leads us to the question: what are the top three causes of errors when making database changes? Based on the survey responses, it turns out that accidental overrides, invalid code and conflicts between different teams are the three major sources of errors.

Speaking with CTO and co-founder Yaniv Yehuda from DBMaestro, we get some more insight into how these errors occur.

“Overrides of database objects – such as procedures or functions etc, when multiple people are accessing and introducing changes at the same time, on a shared database. For example – a developer starts to fix a bug in a database package, while the DBA adds another piece of code to that package. The last one to introduce the change will override the previous one.

This was a challenge for code development decades ago, and was solved by introducing a check out/in process, and revision management. The problem still persists for shared database development.” When it comes to recovery times, most errors are fixed within one hour (51%), closely followed by errors that are handled in 1-5 hours.

DBAs are still mostly in control, but the balance is shifting!

Also, DBAs are still ruling their databases: 59% report that database changes are made by the DBA, versus 20% reporting that DevOps engineers can do the same. This does of course not mean that you as a Linux engineer should refrain from learning about database maintenance, especially since DBAs may be the bottleneck in critical business workflows. I wonder whether DBAs turn out to be bottlenecks in rapid Kanban/Scrum workflows, and on this topic, Yaniv Yehuda from DBMaestro adds: “The fact that the process is being done by a person – adds overhead to an automated process.”

He continues with a real life example: “In large enterprises, if a developer wants to change something in the DB, he submits a request ticket – and the DBA takes it forward. That process can take days (just because people are busy…). Whenever you get that process automated, and each developer can do what he is entitled to – he can push several changes a day, without waiting for anyone else.

DBA may be required to review changes, but that should not be at the stage that a developer is striving to be agile. Errors – they can rise from the manual nature of the process (forgot to do something before or after, connected to the wrong environment, run the wrong script etc etc)”

Automation still an ongoing effort

The survey also showed that database scripts are mostly used to make changes to databases (51%), followed by build/submit scripts using automation tools (34%). The survey was based on a group of 244 IT professionals from around the world, ranging from CTOs to DBAs.

/ P-C Markovski