
AWS introduces a new charge for IPv4 addresses, do you know how to check your own IPv4 and IPv6 configuration?

The IPv4 address pool has shrunk over the years, and as reported by AWS, the cost to acquire one IPv4 address has increased by 300% over the last five years1. Because of this, and the gradual transition to IPv6, AWS is introducing a new charge of $0.005 per hour for every public IPv4 address.
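To get a feel for what that rate adds up to, here is a quick back-of-the-envelope calculation on the command line (assuming a 30-day month):

```shell
# Monthly cost of one public IPv4 address at $0.005 per hour (30-day month)
awk 'BEGIN { printf "%.2f\n", 0.005 * 24 * 30 }'    # prints 3.60

# And per year
awk 'BEGIN { printf "%.2f\n", 0.005 * 24 * 365 }'   # prints 43.80
```

Not much for a single address, but it multiplies quickly across a fleet of Elastic IPs.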

For a bit of fun perspective, check out which IP addresses you are using on your own Linux system (not necessarily Linux running in AWS).

ifconfig -a | egrep "inet|inet6"

This filters the ifconfig output down to the configured IPv4 (inet) and IPv6 (inet6) addresses.
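Note that ifconfig comes from the older net-tools package, which some modern distributions no longer install by default. A sketch of the equivalent using the ip command from iproute2:

```shell
# Brief, table-like overview of every interface and its addresses
ip -brief addr show

# Or filter the full output for address lines, as in the ifconfig example
ip addr show | grep -E "inet |inet6 "
```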

  1. See the AWS News Blog. ↩︎

The Linux command line from a Return on Investment (ROI) perspective AND as a tool to work with data and files

You know, many IT professionals likely meet the following realities in their exposure to Linux:

  • Chances are you will work with files on an existing Linux system. In Linux, all configurations can be traced back to plain text files. Yes, if you work on Windows you also work with files, but in Linux you are likely to do more of that work on the command line. So you need to know how to work with those text files.
  • Moreover, in your daily work with text files – essentially data – you will find that, for business and IT reasons, data files are shared in plain-text formats such as CSV, JSON, YAML, Syslog (RFC 5424), and more. Therefore, you need to know how to import, export, manipulate, and interpret files in these particular formats.
  • Chances are you will not install any system software. So why even bother learning how to do that? This might sound strange to many IT engineers, but there is a range of IT professions with no need to change the system configuration of an existing system. Instead, these IT professionals need to use the computer system to work with data, not with software application management.
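As a taste of that file-centric way of working from the second bullet, here is a minimal sketch (the file sales.csv and its contents are hypothetical, just for illustration) that extracts and sums a CSV column with nothing but standard command-line tools:

```shell
# Create a small sample CSV: a header row plus two data rows
printf 'region,amount\nnorth,100\nsouth,250\n' > sales.csv

# Extract the second column, skipping the header row
cut -d, -f2 sales.csv | tail -n +2

# Sum the same column with awk
awk -F, 'NR > 1 { total += $2 } END { print total }' sales.csv
```

No software installation, no system configuration: just reading and transforming a text file.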

You can call the three points above an argument with three premises. The conclusion that follows is the purpose of this website, Nail Linux Exam. There are many (too many) Linux certifications out there. How can you know which one to pick? Which one is most respected in your industry? Which one is a fad? And which one is most cost-effective for your particular situation?

Nail Linux Exam aims to help you with these questions. This is not just another website promising to help you pass Linux exam X. No, this site takes a more holistic approach to Linux skills and exams, tailored to the needs of professionals working on data that happens to reside on or pass through a Linux system. And we don’t worry about whether the system is configured correctly; we take that for granted!

If you agree with the statements above, then NailLinuxExam.com is likely a useful tool on your Linux command line learning path.

Reap the highest possible benefit as fast as possible

So what you want is to a) invest as little time as possible, b) reap the highest possible benefit (useful Linux command-line knowledge), and c) build a solid foundation for passing the basics of any Linux certification exam.

The common denominator here is the file system and the commands to work with files and the file system, without configuring the system itself.

Linux basics on the command line

This site and its quizzes are for you who want to learn how to work with Linux at the basic level: files, importing/exporting data, and relational databases running on Linux. If we glance back at the three bullet points above, you can see that we will completely avoid installing and maintaining system software (whether web servers, proxy servers, email servers, or what have you). Because of the Linux paradigm that everything is a file, we will learn how to work with files and their contents in a Linux environment.

This website does not compete with, endorse, or promote any of the well-known Linux certifications on the market. They all touch upon the Linux command line and how to work on it, and that is also the foundation for this site. Learn how to work on the command line, with a few role-specific entrances: the business professional, the data scientist, and the Windows engineer.

5 Linux network commands that are similar or the same as in the Windows shell – Part 1

Let us take a look at the most common network command-line utilities that exist in both Linux and Windows. The command names are almost all the same, which makes sense: these tools are part of the daily toolset of any IT engineer, whether you work in Linux or Windows.

traceroute (tracert in Windows)

The handy tracert familiar from Windows environments has its Linux equivalent in traceroute. Be aware of the extra protocols traceroute can use for its probes: in addition to the default UDP datagrams, it can send TCP SYN or ICMP ECHO probes, and you can also select a custom port.
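A sketch of the protocol selection (example.com stands in for any host you are allowed to probe; the -I and -T modes typically require root privileges):

```shell
traceroute example.com             # default: UDP probes
traceroute -I example.com          # ICMP ECHO probes, like Windows tracert
traceroute -T -p 443 example.com   # TCP SYN probes against port 443

# Every output line after the header is one hop, so counting hops is easy
traceroute -I example.com | tail -n +2 | grep -c .
```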

ping (ping in Windows)

Check the availability of a node with ping in both Linux and Windows. This might very well be one of the simplest tools when checking if a network endpoint is available.
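One behavioural difference worth knowing: Windows ping sends four echo requests and stops, while Linux ping runs until interrupted unless you cap it with -c. A sketch (example.com is a placeholder host):

```shell
# Send four echo requests, as Windows does by default
ping -c 4 example.com

# In scripts, the packet-loss figure from the summary line is often all you need
ping -c 4 example.com | grep -oE '[0-9]+% packet loss'
```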

netstat (netstat in Windows)

Use netstat to check all the network connections. Moreover, you can check both listening and established ports on the local node. netstat can be used with a range of options in both Linux and Windows. For instance, you can inspect the kernel routing table and multicast group memberships. Additionally, it can be argued that the Linux command ss for investigating sockets has almost replaced netstat, but for daily network tasks netstat is still very useful in Linux distributions.
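A few typical invocations, plus an awk filter over the output columns (the column positions assume the GNU net-tools output layout):

```shell
netstat -tunap   # all TCP/UDP connections, numeric addresses, owning process (-p needs root)
netstat -rn      # kernel routing table
netstat -gn      # multicast group memberships
ss -tln          # the modern replacement: listening TCP sockets

# Pull out the local address of every established TCP connection
netstat -tn | awk '$6 == "ESTABLISHED" { print $4 }'
```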

nslookup (nslookup in Windows)

Use nslookup to query Internet name servers in both Linux and Windows. Although dig was favoured over nslookup for some time, nslookup is still a very important tool in the toolchest. It is worth mentioning that dig has more options, which makes it great for wrapping in Bash scripts.
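A sketch of both tools (example.com is a placeholder); dig's +short option is what makes it so convenient in scripts:

```shell
nslookup example.com        # query the default resolver
dig +short example.com      # print only the answer records
dig +short MX example.com   # query a specific record type

# In a Bash script: keep just the first A record returned
dig +short example.com | head -n 1
```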

curl (curl in Windows)

curl, or Client URL, has been around since 1998 and was the successor to HttpGet. It is a robust tool for transferring data over networks, with support for a long list of protocols (HTTP, HTTPS, FTP, FTPS, SCP, SFTP, TFTP, DICT, TELNET, LDAP).
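A few representative invocations (the URLs are placeholders):

```shell
curl -o page.html https://example.com/   # save the response body to a file
curl -sIL https://example.com/           # follow redirects, show headers only

# Grab a single header field, handy in scripts
curl -sI https://example.com/ | grep -i '^content-type'
```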

This completes part one of our round-up of five common Linux network commands and their Windows equivalents. In the next part of this blog post series, we will automate these commands in Linux with Bash scripts.

Author: Paul-Christian Markovski, for NailLinuxExam.com.

What’s the scoop on DevOps and database administration?

Data and databases are at the heart of any business endeavour to work with DevOps, where development and operations work closer to one another and embrace automation. Data equals business value, and it has to be safeguarded against corruption as well as data loss.

When DBMaestro reached out to 244 IT professionals, they learned plenty about how databases are handled in DevOps-oriented workplaces. The top three risks reported when deploying database changes were downtime, performance impact, and database/application crashes.

This leads us to the question: what are the top three reasons for errors when making database changes? Based on the survey responses, it turns out that accidental overrides, invalid code, and conflicts between different teams are the three major sources of errors.

Speaking with Yaniv Yehuda, CTO and co-founder of DBMaestro, we get some more insight into how these errors occur.

“Overrides of database objects – such as procedures or functions etc, when multiple people are accessing and introducing changes at the same time, on a shared database. For example – a developer starts to fix a bug in a database package, while the DBA adds another piece of code to that package. The last one to introduce the change, will override the previous one.

This was a challenge for code development decades ago, and was solved by introducing a check out/in process, and revision management. The problem still persists for shared database development.”

When it comes to recovery times, most errors are fixed within one hour (51%), closely followed by errors that are handled in 1-5 hours.

DBAs are still mostly in control, but the balance is shifting!

Also, DBAs are still ruling their databases: 59% report that database changes are made by the DBA, versus 20% reporting that DevOps engineers can do the same. This does of course not mean that you as a Linux engineer should refrain from learning about database maintenance, especially since DBAs may be the bottleneck in critical business workflows. I wonder whether DBAs turn out to be bottlenecks in rapid Kanban/Scrum workflows; on this topic, Yaniv Yehuda from DBMaestro adds: “The fact that the process is being done by a person – adds overhead to an automated process.”

He continues with a real life example: “In large enterprises, if a developer wants to change something in the DB, he submits a request ticket – and the DBA takes it forward. That process can take days (just because people are busy…). Whenever you get that process automated, and each developer can do what he is entitled to – he can push several changes a day, without waiting for anyone else.

DBA may be required to review changes, but that should but be at the stage that a developer is striving to be agile. Errors – they can rise from the manual nature of the process (forgot to do something before or after, connected to the wrong environment, run the wrong script etc etc)”

Automation still an ongoing effort

The survey also showed that database scripts are mostly used to make changes to databases (51%), followed by build/submit scripts using automation tools (34%). The survey was based on a group of 244 IT professionals from around the world, ranging from CTOs to DBAs.

/ P-C Markovski