What’s the scoop on DevOps and database administration?

Data and databases are at the heart of any business endeavour to adopt DevOps, with development and operations working closer to one another and embracing automation. Data equals business value, and it has to be safeguarded against corruption as well as loss.

When DBMaestro surveyed 244 IT professionals, they learned plenty about the way databases are handled in DevOps-oriented workplaces. The top three risks reported when deploying database changes were downtime, performance impact, and database/application crashes.

This leads us to the question: what are the top three sources of errors when making database changes? Based on the survey responses, it turns out that accidental overrides, invalid code and conflicts between different teams are the three major sources of errors.

Speaking with DBMaestro CTO and co-founder Yaniv Yehuda, we get some more insight into how these errors occur.

“Overrides of database objects – such as procedures or functions etc, when multiple people are accessing and introducing changes at the same time, on a shared database. For example – a developer starts to fix a bug in a database package, while the DBA adds another piece of code to that package. The last one to introduce the change, will override the previous one.

This was a challenge for code development decades ago, and was solved by introducing a check-out/check-in process and revision management. The problem still persists for shared database development.”

When it comes to recovery times, most errors are fixed within one hour (51%), closely followed by errors that are handled in 1–5 hours.

DBAs are still mostly in control, but the balance is shifting!

DBAs are still ruling their databases: 59% report that database changes are made by the DBA, versus 20% reporting that DevOps engineers can do the same. This does not mean that you as a Linux engineer should refrain from learning about database maintenance, especially since DBAs may become the bottleneck in critical business workflows. I wonder whether DBAs turn out to be bottlenecks in rapid Kanban/SCRUM workflows, and on this topic Yaniv Yehuda from DBMaestro adds: “The fact that the process is being done by a person – adds overhead to an automated process.”

He continues with a real-life example: “In large enterprises, if a developer wants to change something in the DB, he submits a request ticket – and the DBA takes it forward. That process can take days (just because people are busy…). Whenever you get that process automated, and each developer can do what he is entitled to – he can push several changes a day, without waiting for anyone else.

A DBA may be required to review changes, but that should not be at the stage where a developer is striving to be agile. Errors – they can arise from the manual nature of the process (forgot to do something before or after, connected to the wrong environment, ran the wrong script, etc.)”

Automation still an ongoing effort

The survey also showed that database scripts are the most common way to make changes to databases (51%), followed by build/submit scripts run through automation tools (34%). The survey was based on a group of 244 IT professionals from around the world, ranging from CTOs to DBAs.

/ P-C Markovski

So, Linux engineer, you found yourself in a DevOps “experiment”?

DevOps is a popular trend these days and many Linux engineers (and other engineers) find themselves caught up in the hype. As companies scramble to combine work efforts by Development and Operations teams, the cultural changes that are introduced are huge.

But if you hear the word “experiment” in the midst of the DevOps changes you see around you, first challenge the idea of the DevOps experiment. Let us help you as we look into what constitutes an “experiment”.

To begin, let’s see what Dictionary.com says about this word:

“A test, trial, or tentative procedure; an act or operation for the purpose of discovering something unknown or of testing a principle, supposition, etc.”

The entry goes on to describe the verb, to experiment: “To try or test, especially in order to discover or prove something.”

Let’s not stop here! Let’s take a look at what BusinessDictionary.com says, to get a definition that is viable in a business environment:

[An experiment is a] research method for testing different assumptions (hypotheses) by trial and error under conditions constructed and controlled by the researcher.

So, there is an assumption involved. Surely your business has presented its assumption? And the experiments undertaken are there to check hypotheses by trial and error, under conditions the researcher controls.

So ask yourself: Is the experiment in your outfit using controlled trial and error in order to discover or prove something? Has the experiment been described with clear premises and a conclusion that is to be tested?

If the DevOps experiment in your team does not meet the description of an experiment above, then it is highly probable that the decision makers in your outfit have some homework to do. Furthermore, they are delegating the business research and decision-making to you as an engineer, developer or tester.

Is that okay with you? Please think about the implications of this. And ask yourself: Are you getting credit (and paid) for doing business research?

The takeaway here is: Whenever you are about to be involved in an “experiment” in a business setting, do ask for a definition of the word in the context of the business. Find out what the underlying reason is to undertake the “experiment”.

Some reasons for DevOps experiments with no clear assumption or hypothesis:

  • Companies try to squeeze many job roles into one individual to save money.
  • SCRUM and agile principles are adopted by companies without case studies to see if they really need them.
  • The CI/CD market is constantly evolving; it is tied to DevOps in its immaturity and is interpreted differently by different companies.

Hopefully this blog post helps you to analyse your own situation in the DevOps experiment you are currently in.

Debian or Ubuntu, where did they go different ways?

Most new Linux users soon learn that Ubuntu is based on Debian and leave it at that. Depending on personality, the choice is then to go with Debian or Ubuntu. But how do these two distros really differ from one another? Let’s take a look at a few important differences and similarities.

First off, Debian was released in 1993. Ubuntu was released in 2004 with the “Warty Warthog” release. Simply put, Ubuntu started as a fork of the Debian distro. The younger distro has a different philosophy from Debian’s, and it can be summed up as follows: Ubuntu introduced concepts and tools that were not available in Debian. One important difference is package selection – Ubuntu provides sets of packages that are bundled together in a software “universe”, letting users choose which package to install for a given piece of software.

Fixed release cycles

Ubuntu has a fixed release cycle of six months. Support is offered for 18 months for each release, which simplifies commercial usage of the operating system.

Compare this with Debian, which always maintains three releases: stable, testing and unstable. The first is the latest official release. The testing release always contains packages that are still being tested (and that are to be included in the next stable release). Lastly, the unstable release is where Debian developers work to add new software and improve existing software. Every stable release has three years of full support plus an extra two years of LTS (read more about Debian Long Term Support).

You can also find the oldstable release, which contains the previous stable release, if you ever need to go back one release cycle iteration.

If we take a look at the latest Debian releases, we can see that the release cycle is much slower – approximately two years:

  • Jessie, 2015
  • Wheezy, 2013
  • Squeeze, 2011
  • Lenny, 2009
  • Etch, 2007

The similarities are just as interesting

Developers often work on software for both Debian and Ubuntu. Both distros embrace the free software philosophy, namely to create an operating system from entirely free software. For instance, when developers fix bugs in Ubuntu, they often fix them for Debian as well, since the two distributions share plenty of software packages; bug fixes are simply sent to Debian developers to include in new releases. How is that for knowledge sharing and a helpful attitude!

Keep in mind, though, Ubuntu’s regular release cycles and its package management, which mixes free packages with packages supported by Canonical – and that is where the company Canonical comes into the picture. Hopefully you are now slightly more ready to dig into the differences and similarities of the two related Linux distros. Have fun!

Thinking of the LPIC-1 certification? Check out the career paths

Earlier this year, Tom’s IT Pro provided a summary of the career paths available, using the LPIC certification exams as a reference point.

We would like to highlight the well-described differences between LPIC-1 and LPIC-2:
Whereas the LPIC-1 certification shows that a Linux user can install and work on a workstation running Linux, the LPIC-2 certification also recognizes the ability to work with the Linux kernel and to do capacity planning.

Seems logical enough, right? At the same time, remember that working with the Linux kernel is something few Linux engineers do day to day. So if you choose to go that path, you are already digging deep into Linux.

Similarly, working with daemons and configuring them to run at different runlevels is something even a beginning Linux user can find useful.

The point is: there is an overlap of skills in these exams. A good approach from the start is to find your way around the command line and consider what you really want to use Linux for in the future. Then fill in the knowledge blanks as you progress.

Read the concise LPIC career path summary at Tom’s IT Pro.