Increasing the efficiency of manual testing in the Linux bash shell

Linux is the de facto world standard for server operating systems (although there are still cases when we need Windows to host server components). Hence QA engineers may spend a large amount of time performing their manual testing tasks in Linux environments over a remote SSH session. There are a few principles, commands and tricks which can help you perform such tasks more efficiently, and today we are going to review the most valuable of them.

Let’s now go through them one by one in more detail.

Understanding file privileges concept

When you perform testing in a Linux environment (whether manual or automated), your application under test might fail to access certain files or devices (which are also files in Linux). It might also happen that you need to create such access problems for the application intentionally. Understanding the privileges concept will help bring the efficiency of your testing and problem localization to a new level.

If you create a file in some folder and then list it (with the ls -la command), you will likely see a picture like this:

drwxr-xr-x 2 alexey alexey 4096 may 29 14:38 .
drwxr-xr-x 3 alexey alexey 4096 may 29 14:38 ..
-rw-r--r-- 1 alexey alexey    0 may 29 14:38 myFile

This output consists of 7 columns, which are:

  1. Access permissions for a file

  2. Number of hard links to an entry

  3. User name associated with an entry

  4. Group name associated with an entry

  5. Size of an entry

  6. Last modification date

  7. Name of an entry

Besides the file itself, you can see entries named . and ... The entry . refers to the current folder you are listing the files in, and the entry .. refers to the parent folder of that folder.

We are mostly interested in columns 1, 3 and 4 because they define who can access the files and who cannot.

Each value in column 1 consists of 10 positions. The first one shows whether an entry is a folder: for folders it is set to d, for files to -.

What does all this mean?

The following 9 positions form 3 groups of permissions. The first group defines what the owner (column 3) can do with the file, the second defines what members of the associated group (column 4) can do with it, and the last defines what all other users can do with it.

Each group of permissions consists of three positions: read, write, and execute. If a permission is enabled, the position is filled with the corresponding letter; if not, with a dash character (-).

If the execute permission is enabled for a folder entry, it allows a user to navigate into that folder.
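You can observe this yourself with a quick sketch (the /tmp/permdemo path is just a throwaway example):

```shell
mkdir -p /tmp/permdemo/inner
chmod 755 /tmp/permdemo/inner   # a known starting point: drwxr-xr-x
chmod u-x /tmp/permdemo/inner   # drop the owner's execute bit
ls -ld /tmp/permdemo/inner      # the first column now reads drw-r-xr-x
# cd /tmp/permdemo/inner        # for a regular user this now fails: Permission denied
chmod u+x /tmp/permdemo/inner   # restore access
```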

Looking at the example above, we can see that the file does not have d in the first position (since it is not a folder), and that it is associated with the owner alexey and a group called alexey (each Linux user has an associated group named after that user). According to the rw- in the owner positions, the owner may read and write the file but not execute it. Members of the group alexey may only read the file (r--), and the same holds for all other users (r--). Let’s change things a bit.

Modifying permissions in Linux

Example: for the file we created before, we want to modify the access privileges. We have a group webelementclick whose members should be the only users (besides the owner) authorized to read that file. All other users should not be permitted even to read it.

To do that we will need two commands: chgrp to change the group of a file and chmod to change the permissions. (You can also change the owning user with the chown command, but it won’t be used in our example.) So let’s modify the group, list the files again and see what has changed.

sudo chgrp webelementclick myFile
ls -la
total 8
drwxr-xr-x 2 alexey alexey          4096 may 29 14:38 .
drwxr-xr-x 3 alexey alexey          4096 may 29 14:38 ..
-rw-r--r-- 1 alexey webelementclick    0 may 29 14:38 myFile

We can see that the only thing that changed is the name in column 4. Now we need to modify the privileges, which is done with the chmod command. But how do you tell Linux which permission to enable or disable, and for which permission group?

This is done by passing a parameter of the form XYZ. X defines which permission group to affect: u if we modify permissions for the owning user, g for the associated group, or o for other users. Y is + if we want to add a permission or - if we want to remove one. Z defines the permission itself: r for read access, w for write access, or x for execute access.
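For illustration, here are a few XYZ combinations applied to a scratch file (the file name /tmp/demo.txt is made up for this sketch):

```shell
touch /tmp/demo.txt
chmod 644 /tmp/demo.txt      # a known starting point: -rw-r--r--
chmod u+x /tmp/demo.txt      # u (owner), + (add), x (execute)
chmod g-w /tmp/demo.txt      # group loses write (it had none here, so no change)
chmod o+r /tmp/demo.txt      # others gain read (already present, so no change)
ls -l /tmp/demo.txt          # the first column now reads -rwxr--r--
```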

So, in our example we need to remove read access from other users, which means we will use the command chmod o-r myFile. One should read it as: "modify permissions for myFile so that other users (o), who are neither the owner alexey nor members of the webelementclick group, lose (-) the read (r) privilege".

Let’s now list our files one more time:

drwxr-xr-x 2 alexey alexey          4096 may 29 14:38 .
drwxr-xr-x 3 alexey alexey          4096 may 29 14:38 ..
-rw-r----- 1 alexey webelementclick    0 may 29 14:38 myFile

We can see that the r that used to fill position 8 has changed to -, which means that other users no longer have any kind of access (---) to the file.
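To double-check the result without parsing ls output, the stat command can print the permission bits directly. Here is the same sequence replayed on a scratch copy under /tmp:

```shell
touch /tmp/myFile
chmod 644 /tmp/myFile        # the original -rw-r--r-- state
chmod o-r /tmp/myFile        # the change we just made
stat -c '%A %a' /tmp/myFile  # prints: -rw-r----- 640
```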

Effective navigation through folders

When you work in a branchy folder structure you need a convenient way to navigate through it, as well as a way to quickly jump to the places you visit more often than others. But first of all it is always useful to know which folder you are currently in. You can find that out with the pwd command.

alexey@master-host:~$ pwd
/home/alexey

Auto-filling with the Tab key

Since you do not have a GUI when testing something manually in a Linux terminal, it is often pretty inconvenient to input long commands and long folder or file names. Fortunately, bash offers a remedy. If you start typing a command or a file/folder name and press the Tab key, bash looks for available commands or files starting with the string you have typed. If there are several matches, it fills the command line with their common prefix. You can then continue typing and press Tab again so that bash fills in the next part of the input (or the entire file/command name, if nothing else matches).

For example, suppose we have the following files in the current folder:

drwxr-xr-x 2 alexey alexey          4096 may 30 13:29 .
drwxr-xr-x 3 alexey alexey          4096 may 29 14:38 ..
-rw-r----- 1 alexey webelementclick    0 may 29 14:38 myFile
-rw-r--r-- 1 alexey alexey             0 may 30 13:29 myFile.backup

Assume that we want to view the content of the myFile.backup file with the cat command. We type cat m and press Tab. Bash auto-fills the command to cat myFile because it cannot know whether we mean myFile or want to continue typing towards myFile.backup. Since we need the latter, we add . so that the command becomes cat myFile. and hit Tab again. Now bash knows which file we mean and fills in the remaining part, so the command reads cat myFile.backup.

There is another trick worth knowing. Looking at the previous example again: when your command is not auto-filled completely after pressing Tab, it means an ambiguity was detected and you need to provide more characters. You can find out which files or commands cause the ambiguity by hitting Tab twice.
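You cannot script the Tab key itself, but bash exposes the same matching machinery through its compgen builtin, which is a handy way to see which candidates completion would offer (the files mirror the example above, created in a scratch folder):

```shell
cd "$(mktemp -d)"                 # a throwaway folder with the two example files
touch myFile myFile.backup
bash -c 'compgen -f -- my'        # two candidates: myFile and myFile.backup
bash -c 'compgen -f -- myFile.'   # a single match: myFile.backup
```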

Navigation shortcuts

There are a few shortcuts which the Linux system and the bash shell provide for navigation: ~ (supplied by bash), . and ... The shortcut ~ leads to the home folder of your current user. Thus if you have a file myFile in your home folder, you can access it via the path ~/myFile wherever you are currently located.

The shortcuts . and .. were discussed previously: they refer to the current folder and the parent folder correspondingly, so you can navigate through folders using them. For example, if you want to navigate to the parent folder of your home folder, you can use the command cd ~/.. from any current location.
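A quick round-trip shows the shortcuts in action (the printed paths are examples and depend on your user):

```shell
cd ~        # jump to the home folder from wherever you are
pwd         # e.g. /home/alexey
cd ~/..     # the parent of the home folder, reachable from any location
pwd         # e.g. /home
cd ..       # one more level up
pwd         # e.g. /
```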

You can also create your own shortcuts, which are called symbolic links or soft links. A good practice is to place such links in the home folder of your user (accessible via the ~ shortcut). For example, assume you have a folder /mnt/mywork/mainfolder. You can create a link named, for example, main and place it in the home folder using the following command:

ln -s /mnt/mywork/mainfolder ~/main

Now you can use it in your navigation: for example, cd ~/main brings you to /mnt/mywork/mainfolder.
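You can verify where a link points with readlink or ls -l. Here is a sketch using a scratch link under /tmp (so as not to touch the home folder); note that the target path from the article's example does not even need to exist for the link itself to be created:

```shell
ln -sf /mnt/mywork/mainfolder /tmp/main   # -f replaces the link if it already exists
readlink /tmp/main                        # prints /mnt/mywork/mainfolder
ls -l /tmp/main                           # shows: main -> /mnt/mywork/mainfolder
```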

Chaining commands

When you perform manual testing tasks in Linux environments it is often useful to combine several commands on one line, so that the output of one command becomes the input of another. This is achieved with the | (pipe) symbol.

The most popular use case is filtering command output with the grep command. For example, the command cat mylog | grep error breaks down into two commands under the hood: cat mylog prints the content of the mylog file, and grep error filters the lines of its input so that only the lines containing the error substring remain. The | symbol means that the data produced by cat mylog becomes the input of the grep error command.

Such a chain can be as long as your particular objective requires. For example, testers often need to examine files which are much longer than their terminal window. Even after grep, the filtered content may still take several screen heights. For long files it makes sense to use the less command, which lets you scroll the content with the Up, Down, PgUp and PgDn keys. In such a case your command might look like this:

cat mylog | grep error | less

It consists of three sub-commands: cat mylog prints the whole content of the file, grep error keeps only the lines containing the error substring, and less takes the filtered output and wraps it into a convenient scrollable view.
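As a side note, grep (and less) can also read files directly, so the leading cat stage is optional. A sketch with throwaway sample data:

```shell
printf 'starting up\nerror: disk full\nall good\n' > /tmp/mylog  # sample log content
grep error /tmp/mylog            # same result as: cat /tmp/mylog | grep error
```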

Examining process hierarchy

When you test applications manually in Linux you often need to work with processes, and Linux offers a quite powerful command for such purposes: ps. Sometimes a tester needs to know where a certain process sits in the overall process hierarchy. This is useful, for example, when you need to stop a set of processes which share a common parent: knowing the parent, you can signal its whole process group so that the child processes are terminated together. (Note that simply killing the parent does not by itself kill the children; orphaned children are re-parented and keep running.)

The ps command has a special key called --forest. For example, the command ps -a --forest produces output like this:

  PID TTY          TIME CMD
28670 pts/0    00:00:00 ps
 1957 tty2     00:00:00 gnome-session-b
 2061 tty2     00:05:35  \_ gnome-shell
 2106 tty2     00:00:09  |   \_ ibus-daemon
 2110 tty2     00:00:00  |   |   \_ ibus-dconf
 2362 tty2     00:00:02  |   |   \_ ibus-engine-sim
 5706 tty2     00:00:00  |   \_
 5771 tty2     00:15:33  |       \_ java
 5855 tty2     00:00:03  |           \_ fsnotifier64
 6237 tty2     00:00:13  |           \_ java
27809 tty2     00:00:05  |           \_ java
 2182 tty2     00:00:00  \_ gsd-power
 2183 tty2     00:00:00  \_ gsd-print-notif

If the tree is too long you can chain this command with less, as shown before: ps -a --forest | less. This wraps the entire tree into a handy scrollable view.
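Here is a hedged sketch of terminating a whole subtree at once: instead of killing PIDs one by one, you can signal the process group (the PGID that ps can print). In this demo, setsid puts a made-up parent and its two children into a group of their own; the commands and timings are illustrative only:

```shell
setsid sh -c 'sleep 300 & sleep 300 & wait' &   # a demo parent with two children
parent=$!
sleep 1                                          # give the group a moment to start
pgid=$(ps -o pgid= -p "$parent" | tr -d ' ')     # look up its process-group ID
kill -TERM -- "-$pgid"                           # a negative PID signals every member
```

The negative argument to kill is what makes this a group-wide signal rather than a single-process one.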

Monitoring folder content

Another common task in manual testing is monitoring the content of a folder. For example, you might want to catch the moment when a certain file appears in the target folder, or when its modification date changes, and so on. Linux provides a facility for watching anything: the watch command. It takes another command as a parameter and reruns it at a configured interval (2 seconds by default).

If we need to monitor a folder's content we can use the command watch ls -la.

Using environment variables

One of the regular ways applications are designed to work in different environments is through environment variables. When an application needs to know the value of some setting for a particular environment (for example, the folder where the application is allowed to save reports), it reads an environment variable set by whoever is responsible for hosting the app on that machine (a DevOps engineer, for example).

You have probably encountered some such variables before; JAVA_HOME and PATH are two well-known examples. If you are a manual tester you will probably need to run your applications under different environmental contexts, so you will have to work with environment variables in both manual and automated testing tasks.

When you set a variable there are five things you need to know:

  1. An environment variable is set using the export command in the format export VAR_NAME=VAR_VALUE.

  2. When you specify a value for a variable you can concatenate the values of other variables. For example, export MY_VAR=Hello sets the value Hello for the variable MY_VAR. Then running export MY_OTHER_VAR="$MY_VAR, World!" sets the value Hello, World! for the variable MY_OTHER_VAR.

  3. If you use the value of one variable inside the value of another, you may need to concatenate some letters right after the first variable's value. To let the shell know where the variable name ends and the appended text begins, use braces: export MY_VAR=Hel, then export MY_OTHER_VAR=${MY_VAR}lo.

  4. The value of a variable is accessible using either the $VAR_NAME or the ${VAR_NAME} construction (see above). For example, you can print the value of MY_VAR with the echo command in two ways: echo $MY_VAR or echo ${MY_VAR}.

  5. When you use the export command to set a variable, all processes originated from the place where you set it (a terminal or a script) inherit that value. Thus you can have a script which sets some value for the variable and runs an application that reads it, and later change the script so that another value is assigned. Using this approach you can run two instances of the application under different environmental contexts.
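Points 1, 2 and 5 can be seen in a tiny sketch (the variable names REPORT_DIR and REPORT_FILE are made up for this illustration):

```shell
export REPORT_DIR=/tmp/reports                      # point 1: a plain export
export REPORT_FILE="$REPORT_DIR/summary.txt"        # point 2: concatenation
sh -c 'echo "child sees: $REPORT_FILE"'             # point 5: a child process inherits it
REPORT_DIR=/tmp/other sh -c 'echo "override: $REPORT_DIR"'  # a per-command value
```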

Using effective filters with grep

We already mentioned the grep command when talking about chaining commands. Indeed, grep is probably the tool QA engineers use most often in their manual testing tasks in Linux. In combination with cat (printing file content) and possibly less (wrapping content into a convenient scrollable view), grep provides a powerful mechanism for analyzing logs and configuration files. Here are a few tricks which will help you use grep more efficiently.

As an example we will use a file called myTest.log with the following content:

DEBUG	13:43:59	Running in dev mode
INFO	13:44:01	Reading value a: 5.25
INFO	13:44:02	Reading value b: 0.0
INFO	13:44:02	Evaluating division
ERROR	13:44:03	Division by zero error!
INFO	13:44:04	Sending email report..
DEBUG	13:44:05	INFORMATION: email report sent

Such a log is representative enough to cover the tricks we are going to talk about.

Simple cases

You might want to look at all the records containing the DEBUG substring. This can be done with the command:

cat myTest.log | grep DEBUG

Very simple. By default the search is case-sensitive, which means the command above will find DEBUG but not debug (in lower case). To change this default behavior use the -i key, so the command matching both DEBUG and debug looks like this:

cat myTest.log | grep -i DEBUG

If you want to search for a string containing white-space, you need to wrap the string in double quotes:

cat myTest.log | grep "Evaluating division"

Showing surrounding lines

With grep you can also show some context around the lines which meet the search conditions. The keys -B (before) and -A (after) specify the number of context lines shown for each result. For example, you might want to show 3 lines before and 1 line after each line containing the word ERROR. The command would then look like:

cat myTest.log | grep -B3 -A1 ERROR

Power up your search with regular expressions

The search string you pass to grep as the parameter can be treated in several different ways: either as a fixed string or as a regular expression, and several flavors of regular expressions may be supported. To distinguish them, check the documentation of your particular grep implementation (the man grep command). We are talking here about GNU grep, which ships with most free Linux distributions.

The example shown above contains a deliberate trap. When we used cat myTest.log | grep DEBUG we got the expected results, since the DEBUG word only occurs in the log message type column. However, if we run the same query for the INFO type we get the following:

INFO	13:44:01	Reading value a: 5.25
INFO	13:44:02	Reading value b: 0.0
INFO	13:44:02	Evaluating division
INFO	13:44:04	Sending email report..
DEBUG	13:44:05	INFORMATION: email report sent

The last line is not what we expect to see, but it falls into the query result because the word INFORMATION starts with INFO.

Using regular expression syntax helps build search strings more precisely. However, covering even the basic aspects of the regular expression language is a topic for a separate article, so here I will just show a couple of examples of how you could improve your searches.

This command will show you only the lines which start with INFO (the symbol ^ means the start of the line, so ^INFO does not match the last debug entry):

cat myTest.log | grep ^INFO

Here is a more complex expression, taking all the entries whose timestamp is 13:44 (so-called character classes are used here; the pattern is wrapped in single quotes so that the shell does not try to interpret the brackets):

cat myTest.log | grep '^[[:alpha:]]*[[:space:]]*13:44'

By the way, nothing stops you from chaining one grep command with another. For example:

cat myTest.log | grep ^INFO | grep '[[:digit:]]$'

will show you the lines starting with INFO and ending with a digit ($ represents the end of the line).

Touching files. Why do we need this?

The touch command offers a few useful things to testers performing their manual testing tasks in Linux. Basically, this command changes the timestamp of a file either to the current time or to a specific time passed as an argument. It also creates a new empty file if the file does not exist (so touch can be used as a fast way to create a file).

This feature is really helpful when you work with applications which monitor file modification, because they usually check whether the modification date has changed since the last check. JBoss, for example, is one such application: if you need to redeploy an app hosted in JBoss (e.g. myapp.ear), you just go to the deployment folder and run touch myapp.ear.
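A short sketch of both uses, creating a file and pinning its timestamp (the -d option is a GNU touch feature; the file name is arbitrary):

```shell
touch /tmp/marker                             # creates the file if it does not exist
touch -d '2020-01-01 00:00:00' /tmp/marker    # set an explicit modification time
stat -c '%y' /tmp/marker                      # shows the timestamp we just set
```

Running a plain touch /tmp/marker afterwards would bump the timestamp back to "now", which is exactly what triggers redeploy-style watchers.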

Monitor logs with tail command

Sometimes when you test an application manually you need to watch its logs in real time so that you know the moment a certain event appears. Linux provides the tail command for this purpose. Like most Linux commands, it can be configured to meet your particular needs, but I would recommend considering just two things:

  1. Use the -F key, which is basically a shortcut for two keys: --follow=name --retry. This tells tail to print new data as it is appended to the file, and at the same time to keep retrying if there are problems reading the file. The latter is important when you monitor logs which are rolled once they reach a certain size: the current log file gets renamed and a new empty file is created. Without --retry, your tail command would lose the file and stop printing updates.

  2. Combine tail with grep so that you can filter what tail produces in real time.

So your command could look like this:

tail -F myapp.log | grep ^INFO

Looking for more logs

If you are sure that something is going wrong with your application but there are no relevant entries in its own logs, there are still a few places to examine while investigating the issue. The folder /var/log contains logging journals filled by a special system process (so basically any app can write an entry to those files using the corresponding system call). If something is going wrong, there is a chance that a message was dropped into one of those logs.

Even if your application under test produced no logs, you can get a clue about what happened by examining these logs for possible infrastructure issues (such as request authorization problems or packets blocked by firewall rules).

Knowing these basic principles and tricks will let you perform your manual testing tasks in Linux with really high efficiency. If you still have questions, please send them to me using this form; I will amend the article according to your feedback.