LINUX AWK Tutorial Series | Chapter 3

Hello Everyone, Welcome Back...

In this chapter, let's get our hands dirty and start doing the basics.

Journey Begins

AWK works at the field level of each record, which lets you operate on individual fields.

example 1:

In this example, we search for the pattern "F" (female employees) in the employee data file and perform the action "print".
Note that the search pattern and action are enclosed in single quotes ('').

If you remember the first chapter, this is how AWK works:
Read file > Process lines > Match pattern > Perform action
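A runnable sketch of example 1 (the filename employee.txt is my assumption; save the sample data from Chapter 2 under that name):

```shell
# Recreate the Chapter 2 sample data (the filename employee.txt is my assumption)
cat > employee.txt <<'EOF'
Bob,M,Human Resources,30000
Alice,F,Human Resources,45000
Mark,M,Human Resources,30000
Robin,M,Human Resources,20000
Maria,F,Human Resources,30000
Kevin,M,Human Resources,60000
Robert,M,Human Resources,420000
EOF

# Search for the pattern "F" and perform the "print" action on every match
awk '/F/ {print}' employee.txt
# → Alice,F,Human Resources,45000
# → Maria,F,Human Resources,30000
```

Only the Alice and Maria records contain a capital "F", so those are the two lines printed.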

example 2:

AWK can also select records using regular expressions.
Let's say I want to find all employee names starting with "Rob".

Be careful here: in a regular expression, * means 0 or more occurrences of the preceding character, so /Rob*/ would also match "Ro". To match names that start with "Rob", anchor the pattern at the beginning of the line with ^, i.e. /^Rob/.
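A sketch of example 2 with the anchored pattern (sample records are piped in; against the sample file you would run awk '/^Rob/ {print}' employee.txt, where the filename is my assumption):

```shell
# Anchored pattern: only records whose first characters are "Rob" match
printf '%s\n' 'Robin,M,Human Resources,20000' \
              'Bob,M,Human Resources,30000' \
              'Robert,M,Human Resources,420000' |
  awk '/^Rob/ {print}'
# → Robin,M,Human Resources,20000
# → Robert,M,Human Resources,420000
```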

example 3:

Now, if I want to match multiple patterns in a single command, I can use the "|" operator (regex alternation, not the shell pipe).

Here I am searching for Kevin and Alice in the employee data, with print as the action.
Any number of patterns can be combined, separated by "|".
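A sketch of example 3 (sample records piped in; on the sample file: awk '/Kevin|Alice/ {print}' employee.txt, filename assumed):

```shell
# Alternation: the record matches if either pattern matches
printf '%s\n' 'Alice,F,Human Resources,45000' \
              'Mark,M,Human Resources,30000' \
              'Kevin,M,Human Resources,60000' |
  awk '/Kevin|Alice/ {print}'
# → Alice,F,Human Resources,45000
# → Kevin,M,Human Resources,60000
```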

Hope you have tried the above examples. Feel free to play around with awk.


AWK provides various predefined field variables. These special variables denote the specific fields of each record (line).

$1--> This will indicate the first field.
$2--> This will indicate the second field.
.... and so on for all the respective fields.

Question: Where is $0?
Answer: $0 refers to the entire line, i.e. all fields together.

AWK uses whitespace as the default delimiter, but our sample file is a CSV (comma-separated), so the delimiter we need here is ",".
Delimiter --> separates two words/fields

So if we have to use a delimiter other than white space then we will use an option in AWK for a custom delimiter.

-F ','  --> hyphen capital F followed by the delimiter. If you are using whitespace (the default) as the delimiter between words, this option is not needed.

So I will be using "-F" in our examples.

example 4:
Now I am going to use print $0.

Please notice I am using a custom delimiter and searching for Kevin, Alice, and Maria and printing the full line using $0.
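Example 4 as a runnable sketch (three sample records piped in; against the sample file you would run awk -F ',' '/Kevin|Alice|Maria/ {print $0}' employee.txt, filename assumed):

```shell
# Custom delimiter plus $0 (the whole record)
printf '%s\n' 'Maria,F,Human Resources,30000' \
              'Bob,M,Human Resources,30000' \
              'Kevin,M,Human Resources,60000' |
  awk -F ',' '/Kevin|Alice|Maria/ {print $0}'
# → Maria,F,Human Resources,30000
# → Kevin,M,Human Resources,60000
```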

Let's Say If I want to print only the Name, then that is the first field of each line. So we have to use $1.

Let's print the Name and Salary. Think and try it yourself first.
Did you guess it? No worries: with our delimiter ',', the name is the first field and the salary is the fourth, so we use $1 and $4.
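Printing $1 and $4 side by side, with no comma between them in the print statement, simply concatenates them (a sketch with one sample record piped in):

```shell
# No comma between $1 and $4: the two fields are concatenated directly
printf '%s\n' 'Kevin,M,Human Resources,60000' |
  awk -F ',' '{print $1 $4}'
# → Kevin60000
```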

But look at the output: the fields are jammed together, and that's ugly. I didn't like that!
What to do now?

Now I am going to put a comma (,) between $1 and $4 in the print statement, which makes AWK insert its default output separator between them (do you remember? Yes, it is whitespace).

That looks better than before. But what if I want to add a custom message between the output fields?

In the above screenshot, you can see I am displaying a custom message in the output.
What does that mean?
$1 --> first field
, --> default output separator (space)
"salary is" --> custom message string
, --> again the default output separator (space)
$4 --> fourth field
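A runnable sketch of this output format (sample records piped in; filename employee.txt assumed for the file-based version):

```shell
# Commas insert the default output separator (a space), and the quoted
# string becomes a custom message between the fields
printf '%s\n' 'Kevin,M,Human Resources,60000' \
              'Alice,F,Human Resources,45000' |
  awk -F ',' '{print $1, "salary is", $4}'
# → Kevin salary is 60000
# → Alice salary is 45000
```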

example 5:
What if I don't want to search for any pattern and simply want to display fields from every record?

Guess what I did here, just removed my search pattern.
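Example 5 as a sketch (sample records piped in; on the sample file: awk -F ',' '{print $1, $4}' employee.txt, filename assumed):

```shell
# No search pattern: the action block runs for every record
printf '%s\n' 'Bob,M,Human Resources,30000' \
              'Alice,F,Human Resources,45000' |
  awk -F ',' '{print $1, $4}'
# → Bob 30000
# → Alice 45000
```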

Try doing this in your system.

If you like please follow and comment

LINUX AWK Tutorial Series | Chapter 2

Hello Everyone, in this chapter I am going to share the keywords and topics covered in this AWK series. Let's learn some basic concepts that will be used a lot throughout the tutorials.
AWK is a full-fledged processing language and shares most of its concepts with similar programming languages.

Important topics which would be covered in this series:

Delimiter: Separates fields in AWK. The default delimiter is whitespace, but any other delimiter can be used for splitting fields and processing based on them. In simple words, it separates two words.

Variables :
Variables can be of the following types:

1) User-Defined: Created and set by the user in the AWK program.
2) Built-in: Predefined by AWK; their names should not be reused for user-defined variables.

Conditional Statements:
How to check specific conditions in AWK using

  • if
  • else
  • else if

Loops: Rerun certain statements until specific conditions are met, using

  • for
  • while
  • do-while
Search Pattern: Match specific patterns and process based on the same using single or multiple files.

Arrays: How we use arrays in AWK, including single- and multi-dimensional arrays.

Functions: These can be built-in functions already present in AWK, or user-defined functions (which we will create).

Built-In Functions Examples
Arithmetic, Random, String, Input-Output, Timestamp
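As a small taste of these topics (my own sketch combining an if condition and the built-in string function toupper(), applied to two records from the sample file shown below):

```shell
# An if condition plus the built-in string function toupper()
printf '%s\n' 'Alice,F,Human Resources,45000' \
              'Robin,M,Human Resources,20000' |
  awk -F ',' '{ if ($4 > 30000) print toupper($1), "earns", $4 }'
# → ALICE earns 45000
```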

OS Used: I am using Linux 7 (RHEL/CentOS/OEL). The examples will work on Debian-based systems as well.

Sample File used for example

If you want to practice, please save the file in your working directory. If I use any other files going forward, they will be shared as well.


Bob,M,Human Resources,30000
Alice,F,Human Resources,45000
Mark,M,Human Resources,30000
Robin,M,Human Resources,20000
Maria,F,Human Resources,30000
Kevin,M,Human Resources,60000
Robert,M,Human Resources,420000

The series will continue in the next part.


LINUX AWK Tutorial Series | Chapter 1

Hello Everyone, this is my tutorial series on Linux AWK. I will cover AWK concepts chapter by chapter and try to explain them in an easy-to-follow way.

Introduction to Linux AWK

Linux AWK is a language for processing text files. AWK is typically used as a data extraction and reporting tool, and it is a standard feature of most Unix-like operating systems. An AWK program consists of a set of actions to be taken against streams of textual data, for purposes of extracting or transforming text, e.g. producing formatted reports. The language uses the string datatype, associative arrays, and regular expressions.

AWK was created at Bell Labs in the 1970s and its name is an acronym derived from the surnames of its authors—Alfred Aho, Peter Weinberger, and Brian Kernighan.


  • AWK is used for searching and extracting data from a file.
  • It can also be used for manipulating data and generating reports.


AWK is like an independent programming language which consists of

  •  Variables
  •  Operators
  •  Conditional Statements
  •  Loops
  •  Arrays
  •  Functions (Built-in and User-Defined)


1) A file is treated as a sequence of records.
2) Each line is considered a record consisting of multiple fields.
3) AWK searches for a pattern.
4) It performs the action specified in the command.
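The four steps above can be seen in a tiny one-liner (the log lines and the /error/ pattern here are made up for illustration):

```shell
# awk reads the stream, treats each line as a record, matches the pattern
# /error/, and performs the print action on each match
printf '%s\n' 'boot ok' 'disk error' 'login ok' |
  awk '/error/ {print "found:", $0}'
# → found: disk error
```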

Please follow to get regular updates on this series


Understanding Linux Log Files

Log files are the records that Linux maintains for sysadmins to monitor significant events on the system. They contain messages about the kernel, services, and the applications running on it.

The log files are found in the /var/log directory.

The log files created in a Linux environment can commonly be grouped into four classes:

1) Application Logs
2) Event Logs
3) Service Logs
4) System Logs

Role of Linux log files

Log analysis is a fundamental part of any sysadmin's duties.

By monitoring Linux log files, you can gain a detailed understanding of kernel performance, security, error messages, and warnings, and take a proactive rather than reactive approach to errors. For a sysadmin, regular log file analysis is absolutely required.

In short, log files let you anticipate upcoming issues before they actually occur.

Important Linux log files to keep an eye on

Monitoring and analyzing all of them can be a challenging task.

1) /var/log/messages

This log file contains generic system activity logs.
It is mainly used to store informational and non-critical system messages.
On Debian/Ubuntu-based systems, /var/log/syslog serves the same purpose.

It tracks non-kernel boot errors, application-related service errors, and the messages logged during system startup.
This is the first log file a Linux administrator should check when anything goes wrong.
For example, if you are facing issues with the network card, you can look at the messages stored in this file to check whether something went wrong during the system startup process.


2) /var/log/auth.log

All authentication-related events on Debian and Ubuntu servers are logged here.
If you're looking for anything involving the user authorization mechanism, you can find it in this log file.

Suspect that there might have been a security breach on your server? Notice a suspicious JavaScript file where it shouldn't be? If so, check this log file asap!

Use it to investigate failed login attempts, brute-force attacks, and other vulnerabilities related to the user authorization mechanism.
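For instance, failed SSH logins can be counted per source IP with grep and awk. This is a sketch of my own: two made-up log lines are piped in here instead of reading the real /var/log/auth.log:

```shell
# Count failed SSH password attempts per source IP; on a real system you
# would start from: grep "Failed password" /var/log/auth.log
printf '%s\n' \
  'Oct 18 10:00:01 host sshd[123]: Failed password for root from 203.0.113.5 port 4321 ssh2' \
  'Oct 18 10:00:05 host sshd[124]: Failed password for invalid user bob from 203.0.113.5 port 4322 ssh2' |
  grep "Failed password" |
  awk '{for (i = 1; i <= NF; i++) if ($i == "from") print $(i + 1)}' |
  sort | uniq -c | sort -rn
# prints a count per IP, e.g. "2 203.0.113.5" (with leading whitespace from uniq -c)
```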


3) /var/log/secure

RedHat and CentOS-based systems use this log file instead of /var/log/auth.log.

It is mainly used to track the usage of authorization systems.
It stores all security-related messages, including authentication failures.
It also tracks sudo logins, SSH logins, and other errors logged by the system security services daemon.

All user authentication events are logged here.
This log file can provide detailed insight into unauthorized or failed login attempts and can be very useful for detecting possible hacking attempts.
It also stores information about successful logins and tracks the activities of valid users.


4) /var/log/boot.log

The system initialization scripts under /etc/init.d/ send all bootup messages to this log file.
It is the repository of boot-related information and messages logged during the system startup process.

Analyze this log file to investigate issues related to improper shutdowns, unplanned reboots, or booting failures.
It can also be useful for determining the duration of system downtime caused by an unexpected shutdown.


5) /var/log/dmesg

This log file contains kernel ring buffer messages.
Information related to hardware devices and their drivers is logged here.
As the kernel detects the physical hardware devices attached to the server during the boot process, it captures the device status, hardware errors, and other generic messages.
This log file is mostly useful on dedicated servers: if certain hardware is functioning improperly or not being detected, you can rely on this log file to troubleshoot the issue.


6) /var/log/kern.log

This is a very important log file, as it contains information logged by the kernel.
It is perfect for troubleshooting kernel-related errors and warnings, can be helpful when troubleshooting a custom-built kernel, and also helps in debugging hardware and connectivity issues.


7) /var/log/faillog

This file contains information on failed login attempts.
It can be useful for discovering attempted security breaches involving username/password hacking and brute-force attacks.


8) /var/log/cron

This log file records information on cron jobs.
Whenever a cron job runs, this file records all relevant information, including successful execution and error messages in case of failures.
If you're having problems with a scheduled cron job, check this log file.


9) /var/log/yum.log

It contains the information that is logged when a new package is installed using the yum command.

Use it to track the installation of system components and software packages, and to check whether a package was installed correctly.
It helps you troubleshoot issues related to software installations.
Suppose your server is behaving unusually and you suspect a recently installed software package is the root cause. In such cases, you can check this log file to find out which packages were installed recently and identify the malfunctioning program.

10) /var/log/maillog or /var/log/mail.log

All mail server related logs are stored here.
Find information about postfix, smtpd, MailScanner, SpamAssassin or any other email-related services running on the mail server.
Track all the emails that were sent or received during a particular period
Investigate failed mail delivery issues.
Get information about possible spamming attempts blocked by the mail server.
Trace the origin of an incoming email by scrutinizing this log file.


11) /var/log/httpd/

This directory contains the logs recorded by the Apache server.
Apache logging information is stored in two different log files: error_log and access_log.

The error_log contains messages related to httpd errors, such as memory issues and other system-related errors.
This is where the Apache server writes events and error records encountered while processing httpd requests; if something goes wrong with the Apache webserver, check this log for diagnostic information.
Besides the error_log, Apache also maintains a separate access_log.
All access requests received over HTTP are stored in the access_log file.
It helps you keep track of every page served and every file loaded by Apache, and it logs the IP address and user ID of all clients that make connection requests to the server.
It also stores the status of each access request: whether a response was sent successfully or the request resulted in a failure.

12) /var/log/mysqld.log or /var/log/mysql.log

As the name suggests, this is the MySQL log file (if MySQL is installed).
All debug, failure, and success messages related to the [mysqld] and [mysqld_safe] daemons are logged to this file.
RedHat, CentOS, and Fedora store MySQL logs in /var/log/mysqld.log, while Debian and Ubuntu maintain the log in the /var/log/mysql.log file.

Use this log to identify problems while starting, running, or stopping mysqld, to get information about client connections to the MySQL data directory, and for information about query locks and slow-running queries.


Shell Script with a Progress Bar

Have you ever wondered how to write a shell script that displays a progress bar? Such scripts look polished and more user-friendly.

In this post, I am going to share how to write a simple shell script with a progress bar.

What should we use to implement this?

The answer: only the "echo" command and a few of its options.

Command: echo

Options used

-n: Do not append a newline
-e: Enable interpretation of backslash escapes
\r: Carriage return; moves the cursor back to the beginning of the line without starting a new line

Please note that we have to decide in advance which percentage steps to display. Make sure the end of each line stays aligned (pad with spaces as needed). You can add any steps in between, matching your actual work items.

Sample Script

[himanshu@oel7 ~]$ cat
##Progress Bar Sample Script##
echo "Work in progress"
echo -ne '=          [10%]\r'
sleep 2
echo -ne '===        [30%]\r'
sleep 2
echo -ne '=====      [50%]\r'
sleep 2
echo -ne '=======    [70%]\r'
sleep 2
echo -ne '==========[100%]\r'
echo -ne '\n'
sleep 2
echo "Sample script Completed"

Sample Output
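As an aside, the hard-coded steps above can also be generated in a loop; this variant is my own sketch, not part of the original script:

```shell
# Build the bar from the percentage: one '=' per 10%, padded to width 10 so
# every line ends aligned; \r then returns the cursor to the start of the line
for pct in 10 30 50 70 100; do
    bar=$(printf '=%.0s' $(seq 1 $((pct / 10))))
    printf '%-10s[%3d%%]\r' "$bar" "$pct"
    sleep 1
done
printf '\n'
echo "Progress complete"
```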


How to Change EBS Homepage Branding

Sometimes we need to change the branding text on the EBS home page; mostly this is done in a cloned environment.


1) Navigate: Application ---> Function
2) Query the function FWK_HOMEPAGE_BRAND and enter the new value E-Business Suite – TEST
3) Save
4) Log out and log back in to verify


Error : ORA-00600: internal error code, arguments: [1350], [1], [23], [], [], [], [], [], [], [], [], []


We can observe ORA-600 errors in the alert log. The error can also be seen when running the query below:

select T.nls_territory
from apps.fnd_territories_vl T, v$nls_valid_values V
where T.nls_territory = V.value
and V.parameter = 'TERRITORY';

ERROR at line 2:
ORA-00600: internal error code, arguments: [1350], [1], [23], [], [], [], [],
[], [], [], [], []

SQL Developer connection will also give the same issue.


On the DB server, re-create the nls/data/9idata directory:
1. cd $ORACLE_HOME/nls/data/
2. mv 9idata 9idata_old
3. Run the perl script under $ORACLE_HOME/nls/data/old/
4. Check that a new 9idata directory was created.
After creating the directory, make sure that the ORA_NLS10 environment variable is set
to the full path of the 9idata directory whenever you enable the 11g Oracle home.
5. Restart the database and run the query again; it should now return data.


Encrypt Shell Script on Linux using SHC

shc - Generic shell script compiler

       shc [ -e date ] [ -m addr ] [ -i iopt ] [ -x cmnd ] [ -l lopt ] [ -o outfile ] [ -ABCDhUHvSr ] -f script

       shc creates a stripped binary executable version of the script specified with -f on the command line.

       The  binary version will get a .x extension appended by default if outfile is not defined with [-o outfile] option and will usually be a bit larger in size than the original ascii code.  Generated C source code is saved in a file with the extension .x.c or in a file specified with an appropriate option.

       If you supply an expiration date with the -e option, the compiled binary will refuse to run after the date specified.  The message Please contact your provider will be displayed instead.  This message can be changed with the -m option.

       You can compile any kind of shell script, but you need to supply valid -i, -x, and -l options.

       The compiled binary will still be dependent on the shell specified in the first line of the shell code (i.e.  #!/bin/sh), thus shc does not create completely independent binaries.

       shc  itself is  not  a compiler such as cc, it rather encodes and encrypts a shell script and generates C source code with the added expiration capability.  It then uses the system compiler to compile a stripped binary that behaves exactly like the original script.  Upon execution, the compiled binary will decrypt and  execute  the  code  with  the  shell  -c
       option.  Unfortunately, it will not give you any speed improvement as a real C program would.

       shc's main purpose is to protect your shell scripts from modification or inspection.  You can use it if you wish to distribute your scripts but don't want them to be easily readable
       by other people.

Steps to encrypt the files

1) Install SHC on Linux.

I am using Linux 7 here:

 yum install shc 

Alternatively, it can be downloaded and built from source.

2) Create the shell script which needs to be encrypted. I am using a sample script I have created for this example.

[himanshu@oel7 ~]$ cat
echo $menu1
echo $menu2

[himanshu@oel7 ~]$ ./

3) Encrypt shell script using SHC.

[himanshu@oel7 ~]$ shc -f

Once encrypted, three files will be present:

-rwxr-xr-x. 1 himanshu himanshu    62 Jul 21 20:40
-rw-rw-r--  1 himanshu himanshu 17771 Oct 18 23:17
-rwxrwxr-x  1 himanshu himanshu 11216 Oct 18 23:17

- the original, unencrypted shell script
- the encrypted script in binary format (with a .x extension)
- the C source code of the script (with a .x.c extension); this C source is compiled to create the encrypted binary above

The whole logic behind shc is to convert the shell script into a C program and then compile that to generate the executable.

[himanshu@oel7 ~]$ file Bourne-Again shell script, ASCII text executable
[himanshu@oel7 ~]$ file C source, ASCII text
[himanshu@oel7 ~]$ file ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.32, BuildID[sha1]=469b3a7758b1b165130711853a80075b8c940c43, stripped

4) Run encrypted shell script.

[himanshu@oel7 ~]$ ./

Now, for security, you can move the original script and the generated C source to a safe location and use only the binary file.

Other cool features available with SHC are 

a) Specifying Expiration Date for Shell Script

Using shc you can also specify an expiration date, i.e., after this date, anyone trying to execute the script will get an error message.

Create a new encrypted shell script using “shc -e” option to specify an expiration date. The expiration date is specified in the dd/mm/yyyy format.

$ shc -e 17/10/2020 -f
In this example, if someone tries to execute the script after 17-Oct-2020, they'll get the default expiration message as shown below.

[himanshu@oel7 bkp]$ shc -e 17/10/2020 -f
[himanshu@oel7 bkp]$ ls -ltr
total 36
-rwxr-xr-x. 1 himanshu himanshu    62 Jul 21 20:40
-rw-rw-r--  1 himanshu himanshu 17972 Oct 18 23:29
-rwxrwxr-x  1 himanshu himanshu 11256 Oct 18 23:29
[himanshu@oel7 bkp]$ ./
./ has expired!
Please contact your provider

If you want to set a custom message to display instead, use the -m option as shown below:
$ shc -e 17/10/2020 -m "Contact for latest version of this script" -f

[himanshu@oel7 bkp]$ shc -e 17/10/2020 -m "Contact for latest version of this script" -f
[himanshu@oel7 bkp]$ ./
./ has expired!
Contact for latest version of this script

b) Create Redistributable Encrypted Shell Scripts

Apart from -e, and -m (for expiration), you can also use the following options:

-r relaxes security to create a redistributable binary that executes on other systems running the same operating system as the one on which it was compiled.

-v is for verbose output

$[himanshu@oel7 bkp]$ shc -v -r  -f
shc shll=bash
shc [-i]=-c
shc [-x]=exec '%s' "$@"
shc [-l]=
shc opts=
shc: cc -o
shc: strip
shc: chmod ug=rwx,o=rx


Script to find which users have sudo access in Linux


for username in `cut -d: -f1 /etc/passwd`
do
  sudo -U "$username" -l
done


Query to find which all schema password will be changed when using ALLORACLE mode in FNDCPASS

When we change the seeded schema passwords using FNDCPASS in ALLORACLE mode, there may be custom schemas registered as well. If we need to find out which custom/seeded schema passwords will be changed by FNDCPASS, we can use the query below.


select * from fnd_oracle_userid where read_only_flag='A';

If you like please follow and comment