LINUX AWK Tutorial Series | Chapter 6


Hello Guys.. Welcome

In this series, we are in chapter 6. Now I am going to discuss the conditional statements in AWK.


Now, what are these conditional statements? 

Conditional statements help to process a set of actions based on a defined condition.

The types of conditional statements we see across most programming languages are available here as well:

  • if
  • else
  • else if
Sample syntax of if/else (note that AWK, unlike shell, does not use a "then" keyword):

if (condition)
    actions;
else if (condition1)
    actions;
else
    actions;


Now let's see how to use them in AWK.


Example 1:

Now let's say I want to give a 10% appraisal to all employees having a salary of less than 30000.
I am going to use my Employee data csv file.
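The screenshot for this example is not available, so here is a sketch of the kind of command it shows (assuming the Employee_Data.csv file from Chapter 2):

awk -F',' '{ if ($4 < 30000) print $1, $4*1.10 }' Employee_Data.csv

With the sample data, only Robin (salary 20000) qualifies, so this prints: Robin 22000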




If you followed the previous chapters, it should be easy to see what each part is doing. The only addition here is the if (condition), based on which I am printing my data.

Try at Home: Try to do the same with an AWK script.


Example 2:

Now let's say I want to give a 10% appraisal to all employees having a salary of less than 30000, and 5% to all having a salary greater than that.

Here I can make use of if/else blocks.
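The screenshot is missing, so here is a sketch of the if/else one-liner (the 10%/5% factors follow the problem statement):

awk -F',' '{ if ($4 < 30000) print $1, $4*1.10; else print $1, $4*1.05 }' Employee_Data.csv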




Note that in a one-liner, the if action and the else are separated by a semicolon (;).


Try at Home: Try to do the same with an AWK script.

Example 3:

Adding one more condition

I want to give a 10% appraisal to all employees having a salary of less than 30000, 5% to all having a salary greater than that, and 1% to all having more than 200000.

Now here I can make use of if, else if, and else, all three at once.
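Again, a sketch of the kind of command used (the screenshot is not available):

awk -F',' '{ if ($4 < 30000) print $1, $4*1.10; else if ($4 > 200000) print $1, $4*1.01; else print $1, $4*1.05 }' Employee_Data.csv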




So I am checking the salary in 3 parts:
1) salary of less than 30000
2) salary of more than 200000
3) all other salaries

More chapters to continue.

If you like please follow and comment

LINUX AWK Tutorial Series | Chapter 5



Hello, welcome to this series. In this chapter, I am going to discuss the built-in variables in AWK.

Built-In Variables


These variables are pre-defined and we can use them directly. Also, remember not to give user-defined variables the same names as built-in variables.

Below are the built-in variables available in awk.

RS: Record separator for the file being processed. The default value is a new line.
FS: Field separator. The default value is whitespace.
ORS: Output record separator. The default value is a new line.
OFS: Output field separator. The default value is whitespace.
NR: Number of records processed by awk so far.
NF: Number of fields in the current record.
FNR: Current record number in each file. It is incremented each time a new record is read and reset to 0 when a new file is started.
FILENAME: Name of the file being read.
ARGC: Number of arguments provided on the command line.
ARGV: Array that stores the command-line arguments.
ENVIRON: Array of environment variables and their values.


Remember, they are case sensitive and have to be written in capitals only.

In this chapter, I am going to use a new file as well. You can create the same for practice.

random_file.txt

Name
Gender
Dept
Salary

Bob
M
HR
20000

Marlin
F
IT
300000

Peter
F
ADMIN
34455

Rosy
F
HR
78098

Pete
M
IT
89023



In the earlier Employee Data examples, the field separator was a comma (,) and the record separator was a new line.



How to use the value of the FS variable

By default, the field separator is whitespace (which can be multiple spaces/tabs).
To override this we used -F (explained in earlier chapters).
But now we can use FS inside the BEGIN section as the field separator.

Example 1:
In the previous chapter, we used an example to display the total salary of employees. There, if you notice, I used "-F".

Now that can be replaced by using the FS built-in variable as below.
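The screenshot is missing; the command would look something like this sketch (the variable name total is my assumption):

awk 'BEGIN { FS="," } { total = total + $4 } END { print "Total Salary:", total }' Employee_Data.csv

With the sample file, this prints: Total Salary: 635000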



FS is used inside BEGIN in this example as a variable and assigned the value comma (,).

Example 2:
The question arises: if FS is a variable, why should it be used only with BEGIN?
See the below example and try to understand what happens when FS is used without BEGIN.
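A sketch of the two commands the missing screenshot likely showed, first without BEGIN and then with it:

awk '{ FS=","; print $1, $2 }' Employee_Data.csv
awk 'BEGIN { FS="," } { print $1, $2 }' Employee_Data.csv

In the first command, the output starts like this:

Bob,M,Human Resources,30000
Alice F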



In the first awk command, I used FS="," as the field separator and printed the name and gender. But if you look at the output, the first line from Employee Data was printed in full, and only from the second line onward were just the name and gender printed. What AWK did here: it split the first record and ran print (the action) before the FS assignment took effect, so the output was correct only from the second line onward.

So in these scenarios, either use BEGIN (since it is pre-processing, it runs before the main action) or use the -F option of the awk command.

Example 3:

I want the fields in my output to be separated by "|"; what can we do?
So here we can use the OFS variable.
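A sketch of the command (the screenshot is missing):

awk 'BEGIN { FS=","; OFS="|" } { print $1, $2, $3, $4 }' Employee_Data.csv

Output lines look like: Bob|M|Human Resources|30000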



Please also note from the above that BEGIN can be used standalone, without END.
In the BEGIN section, I have defined FS and OFS and then performed an action.

If I need a blank line after each record in the output, then I will use ORS as below. By default, the ORS value is a single new line, so I have used \n twice (ORS="\n\n").
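A sketch, extending the previous command with ORS:

awk 'BEGIN { FS=","; OFS="|"; ORS="\n\n" } { print $1, $2, $3, $4 }' Employee_Data.csv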


 
Example 4:

I have created a new random_file. In that file, if you look, the fields are separated by a new line and the records are separated by a blank line (2 new lines).
Now in this scenario, I will use FS and RS variables.
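A sketch of the command (screenshot missing). RS="" is awk's standard paragraph mode, which treats blank-line-separated blocks as records; with FS="\n" each line of a block becomes a field:

awk 'BEGIN { FS="\n"; RS="" } { print $1, $4 }' random_file.txt

For the record blocks this prints lines like: Bob 20000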


I hope this is clear to understand. Any doubts, please post them in the comments section.


Example 5:

How do we use the NR variable?
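A sketch of the command (screenshot missing):

awk -F',' '{ print NR, $0 }' Employee_Data.csv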




Here NR is printing the line numbers for each record displayed in output.

How to use the NF variable
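A sketch:

awk -F',' '{ print NF }' Employee_Data.csv

Each record in the sample file has 4 fields, so this prints 4 for every line.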



This will display the total fields present in each record.

Example 6:

I want to print just the last field.
What should I do? NF is the solution: print $NF prints the last field of each record.
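A sketch:

awk -F',' '{ print $NF }' Employee_Data.csv

Since salary is the last field in the sample file, this prints the salary column.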




Find the records where the total number of fields is less than 5.
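A sketch of such a check:

awk -F',' 'NF < 5 { print $0 }' Employee_Data.csv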



Try at home: Write an awk command to print the first and last field of each record.


Example 7:


How to print the first 3 lines from a file.
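A sketch:

awk 'NR < 4 { print }' Employee_Data.csv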



I have used NR<4, so only records 1 to 3 are printed.


Example 8:


How to read the first 3 lines from 2 different files?
Now I should use FNR.
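A sketch, assuming the two sample files used in this series:

awk 'FNR < 4 { print FILENAME, NR, FNR, $0 }' Employee_Data.csv random_file.txt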



If you observe, the FNR value was reset for each file. But NR does not reset with a new file: it keeps its value from one file and continues the count into the next.

Example 9:

How to print the name of the file being processed? I will use the FILENAME variable. I am showing a couple of methods below.
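Two sketches (the exact commands in the screenshot are not available):

awk '{ print FILENAME, $0 }' Employee_Data.csv
awk 'END { print FILENAME }' Employee_Data.csv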



By now you should be understanding the commands easily.


Try at Home: Print all employees' data who are female and have a salary of more than 1000. Output fields should be separated by "|". Also, print the file name at the end.


The commands are getting bigger now, and it is getting difficult to write them on the terminal. Is there any way to make this a little easier?

Yes, Try AWK scripts

AWK Scripts

For awk scripts, I create a separate file containing the awk commands and actions, and pass it to awk using the "-f" option.


The command section is the part we normally write in single quotes on the terminal.

Example 10:


Here I have created a com.awk file containing the command I was writing on the terminal, and I execute it with the following command.

awk -f <command_file> <file_name_for_processing>
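The contents of com.awk are not visible here; as a sketch, it could contain the total-salary command from Example 1:

BEGIN { FS="," }
{ total = total + $4 }
END { print "Total Salary:", total }

And it would be run as: awk -f com.awk Employee_Data.csv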

There is no restriction on the command file's extension.


More chapters will continue.



If you like please follow and comment

LINUX AWK Tutorial Series | Chapter 4




Hello Guys, Hope you are enjoying the series. Remember practice will only help you grow and learn. So keep practicing!!

In this post, I am going to discuss the Variables and Operators in AWK command.


Variables and Operators

1) User-Defined Variables

So let's discuss user-defined variables. As mentioned earlier, AWK is a combination of a filtering tool and a programming language. So, like other programming languages, it supports variables, constants, operators, loops, etc.

A variable is a name that refers to a value.
We will see both built-in and user-defined variables. 
I am going to focus on user-defined variables here.

Important Note:
  • No variable declaration is required in AWK, the same as in shell scripting.
  • Variables are automatically initialized to a null string or zero.
  • A variable should begin with a letter and can be followed by letters, numbers, and underscores.
  • Variables are case sensitive, so be careful. Example: the variables "Him" and "him" are treated separately.

Example 1:
Defining variables and printing them
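The screenshot is not available; here is a sketch with assumed values (a number for a, strings for b and c):

awk '{ a=10; b="Hello"; c="World"; print a, b, c }' Employee_Data.csv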




 So what we did here, Any Guesses...

I have defined a variable "a" which stores a number, so no quotes are needed (even if you add quotes, there is no impact).
Variables "b" and "c" store strings, so they are enclosed within double quotes.
All assignments are separated by a semicolon (;).
Then I am printing the variables a, b, and c; remember I used a comma (,) between them so that the default output delimiter separates the values.
Otherwise the output will be like below. Why? Without commas, print concatenates the variables: the number and strings are joined into a single string.
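A sketch of the no-comma version and its output:

awk '{ a=10; b="Hello"; c="World"; print a b c }' Employee_Data.csv

Each output line becomes: 10HelloWorld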




Now please note that you need to pass a file name as well; if you don't, awk will keep waiting for input.
But why do we have 7 lines in the output?
Reason: AWK processes the file line by line and runs the action for every line, but the action only prints the variable values, so the same output appears once per line.

Try at home: How to print only one line in this output. Feel free to write your answer in the comments section.

What happens when I only give print rather than print a,b,c? Then awk will not print the variable values and will just print the normal file contents.
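A sketch:

awk '{ a=10; b="Hello"; c="World"; print }' Employee_Data.csv

This simply prints each line of Employee_Data.csv.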


Example 2:

See the below example. Awwww, what happened here?
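A sketch of the kind of command in the missing screenshot (values assumed):

awk '{ a=20; b="Hello"; print a+b }' Employee_Data.csv

This prints 20 for every line of the file.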




If I do an arithmetic operation like "a+b" between a number and a string, awk treats the string variable as 0. So the output displays 20+0=20.

If both are numbers, then there is no issue; awk will perform the arithmetic operation. Please see the below example.
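A sketch:

awk '{ a=20; b=30; print a+b }' Employee_Data.csv

This prints 50 for every line.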




2) Operators

In the above example, I used the "+" operator. There are multiple operators that can be used for various purposes.


  • Arithmetic Operators: +, -, *, /, %, ++, --
  • Assignment Operators: =, +=, -=, *=, /=
  • Relational Operators: <, >, >=, <=, ==, !=
  • Logical Operators: &&, ||, !
  • String Comparison: ~, !~
  • String Concatenation: blank space

Example 3:

Let's find all the employees who are male and have a salary of less than 25000.
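A sketch of the command (screenshot missing):

awk -F',' '$2=="M" && $4<25000 { print $1 }' Employee_Data.csv

With the sample data, this prints: Robin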


See what I did in the command above:

-F --> used because the delimiter for my fields is a comma
$2 --> my second field contains the M/F column, so I match it against "M"
&& --> the logical operator joining the 2 conditions
$4 --> my salary column, which has to be less than 25000
print $1 --> prints the name of the employee, as it is stored in field one


Example 4:

Print the records from the employee file along with the sequence number using user-defined variables.
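A sketch:

awk -F',' '{ print ++x, $0 }' Employee_Data.csv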



Explanation:

-F --> used because the delimiter for my fields is a comma
++x --> variable x with the increment operator. As mentioned earlier, a variable initializes to 0 by default, and I add 1 each time before printing x.
print $0 --> prints all fields


Example 5:

Match names which start with Rob and where the employee is male.
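A sketch, using the pattern from the explanation below:

awk -F',' '$1~/Rob*/ && $2=="M" { print $1 }' Employee_Data.csv

With the sample data, this prints Robin and Robert.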



I will not explain all the syntax; I hope by now you understand the other parts. So what's new here?

I used $1~"Rob*" or $1~/Rob*/ --> if you remember, // was used for pattern search; the same can be used here, or you can use double quotes "". (Strictly speaking, in an awk regular expression * means "zero or more of the preceding character", so a plain /Rob/ is enough to match names containing "Rob".)
~ --> the string (pattern) comparison operator.


If I want to print all employee names other than those starting with Rob:
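A sketch:

awk -F',' '$1!~/Rob/ { print $1 }' Employee_Data.csv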



Example 6:

Let's say I want to find the empty lines in my employee data file.
Note: I am adding a few empty lines to my file using the vi editor.
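A sketch of the command (screenshot missing):

awk '/^$/ { x=x+1; print x }' Employee_Data.csv

It prints a running count (1, 2, 3, ...) once for each empty line.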



Explanation:

^$ --> this matches empty lines. ^ --> start of the line, $ --> end of the line; together they mean no data in between.
x=x+1 --> a variable that increases its value every time the pattern matches. By default, the variable value is 0.
print x --> prints the value of x.

The same can be done via the below as well.
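For example, with the increment operator instead (a sketch):

awk '/^$/ { print ++x }' Employee_Data.csv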





But the above output is not neat; I only want to display the total count once.

So that introduces our next concept of Begin and End

Begin and End

Begin Meaning: 

It sets an action on pre-processing. This will be executed first before the main execution takes place of the file processing.

End Meaning:

It sets an action on post-processing. This will be executed after the main execution takes place of the file processing.

These are optional sections, not required every time. The BEGIN and END keywords have to be written in capital letters only.


Example 7: 
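A sketch of the command (screenshot missing; the BEGIN message text is assumed):

awk 'BEGIN { print "Empty Line Count:" } /^$/ { x=x+1 } END { print x }' Employee_Data.csv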


The total number of empty lines is displayed.

BEGIN --> will just print the string
/^$/{x=x+1} --> keeps counting each empty line
END --> once the end of the file is reached, print x displays the final value of x.


Example 8: 

On the same lines, let's find the total salary of all employees.
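A sketch of the command (screenshot missing; the variable name total is assumed):

awk -F',' '{ total = total + $4 } END { print "Total Salary:", total }' Employee_Data.csv

With the original sample data, this prints: Total Salary: 635000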




I suppose this is self-explanatory. Please try to work through it; if you have any doubts, feel free to mention them in the comments section.

The next session will continue further


If you like please follow and comment

LINUX AWK Tutorial Series | Chapter 3



Hello Everyone, Welcome Back...

In this chapter, let's get our hands dirty and start doing the basics.


Journey Begins

AWK works at the field level of a record, which lets us perform operations on the individual fields.

example 1:
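The screenshot is missing; the command would be along these lines:

awk '/F/ { print }' Employee_Data.csv

With the sample file, this prints the lines for Alice and Maria (the only lines containing "F").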




In this example, we are searching for the pattern "F" (female employees) in the Employee Data file and performing an action, which is "print".
Please observe that the search/action part is enclosed in single quotes ''.


If you remember the first chapter, that's how AWK works:
Read file > Process lines > Search pattern > Perform action



example 2:

AWK can also process records based on a regular expression.
Let's say I want to find all employee names starting with "Rob".
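A sketch of the command (screenshot missing):

awk '/Rob*/ { print }' Employee_Data.csv

With the sample file, this prints the Robin and Robert lines.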






We are searching here for the pattern "Rob*". One note: in an awk regular expression, * means 0 or more occurrences of the preceding character (not "any characters"), so /Rob/ by itself already matches the names starting with "Rob".


example 3:

Now, if I want to find multiple patterns in a single file, I can use the "|" pipe operator (regular-expression OR).
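A sketch:

awk '/Kevin|Alice/ { print }' Employee_Data.csv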



Here I am searching for Kevin and Alice in the employee data, with print as the action.
We can use multiple patterns separated by "|".



Hope you have tried the above examples. Feel free to play around with awk.

Parameters:

AWK has various predefined parameters. These are special parameters that denote specific fields of each record (line).

$1--> This will indicate the first field.
$2--> This will indicate the second field.
.... and so on for all the respective fields.

Question: Where is $0?
Answer: $0 indicates the full line, which contains all the fields.


AWK uses whitespace as the default delimiter, but our sample file is a CSV, which means comma-separated. So our delimiter is "," in this case.
Delimiter --> separates 2 words/fields

So if we have to use a delimiter other than white space then we will use an option in AWK for a custom delimiter.

-F ',' --> It is a hyphen and capital F followed by the delimiter. If your fields are separated by spaces, there is no need to use this option.

So I will be using "-F" in our examples.


example 4:
Now I am going to use print $0.
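A sketch of the command (screenshot missing):

awk -F',' '/Kevin|Alice|Maria/ { print $0 }' Employee_Data.csv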



Please notice I am using a custom delimiter and searching for Kevin, Alice, and Maria and printing the full line using $0.

Let's say I want to print only the Name; that is the first field of each line, so we have to use $1.
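A sketch:

awk -F',' '/Kevin|Alice|Maria/ { print $1 }' Employee_Data.csv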



Let's print the Name and Salary. Think and try.
Did you guess it? No worries: with our delimiter ',', the name is the first field and the salary is the 4th field.
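A sketch of the first attempt:

awk -F',' '/Kevin|Alice|Maria/ { print $1 $4 }' Employee_Data.csv

Output lines look like: Alice45000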


Oh, but look at the output: that's ugly, I didn't like that!!!
What to do now?

Now I am going to use a comma (,) between $1 and $4, which makes AWK put its default output delimiter between them (do you remember? Yes, it was whitespace).
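A sketch:

awk -F',' '/Kevin|Alice|Maria/ { print $1, $4 }' Employee_Data.csv

Output lines look like: Alice 45000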


Now it looks better than before. But what if I want to add a custom message between the output fields?
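A sketch:

awk -F',' '/Kevin|Alice|Maria/ { print $1, "salary is", $4 }' Employee_Data.csv

Output lines look like: Alice salary is 45000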


In the above example, you can see I am displaying a custom message in the output.
What does that mean?
$1--> first field
,--> default delimiter(space)
"salary is"-->custom message string
,--> again default delimiter (space)
$4--> fourth field.


example 5:
If I don't want to search for any pattern and simply want to display all the fields:
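A sketch:

awk -F',' '{ print $0 }' Employee_Data.csv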



Guess what I did here, just removed my search pattern.


Try doing this in your system.

If you like please follow and comment

LINUX AWK Tutorial Series | Chapter 2


Hello Everyone, now in this chapter I am going to share the keywords and topics covered in the series of AWK. Let's learn some basic concepts which will be used a lot in this tutorial series.
AWK itself is a full processing language and uses most of the concepts found in similar programming languages.


Important topics which would be covered in this series:

Delimiter: It is used to separate fields in AWK. By default, the delimiter for a file in AWK is whitespace, but we can use any other delimiter for separating the fields and process based on the same. In simple words, it helps in separating two words.

Variables :
Variables can be of the following types:

1) User-Defined: Defined by the user in AWK.
2) Built-in Variables: These are predefined in AWK, and their names should not be reused for user-defined variables.
Example: RS, FS, ORS, OFS, NR, NF, ARGV, ARGC

Conditional Statements:
How to check specific conditions in AWK using

  • if
  • else
  • else if
Loops:

Loops help to rerun certain statements until specific conditions are met.

  • for
  • while
  • do-while
Search Pattern: Match specific patterns and process based on the same using single or multiple files.

Arrays: How we use arrays in AWK. Single and Multi-Dimensional Arrays

Functions: These can be built-in functions already present in AWK, or user-defined functions (we will create them).

Built-in function examples: Arithmetic, Random, String, Input-Output, Timestamp

OS Used: I am using Linux 7 (RHEL/CentOS/OEL). The examples will work on Debian-based systems as well.

Sample File used for example

If you want to practice, please save the file in your working directory. If I use any other files going forward, they will also be shared.

Employee_Data.csv

Bob,M,Human Resources,30000
Alice,F,Human Resources,45000
Mark,M,Human Resources,30000
Robin,M,Human Resources,20000
Maria,F,Human Resources,30000
Kevin,M,Human Resources,60000
Robert,M,Human Resources,420000



The next part in the series will continue..


If you like please follow and comment

LINUX AWK Tutorial Series | Chapter 1


Hello Everyone, this is my tutorial series on Linux AWK. I will cover the AWK concepts chapter-wise in this series and try to explain them in an easy way.


Introduction to Linux AWK

Linux AWK is a language for processing text files. AWK is typically used as a data extraction and reporting tool. It is a standard feature of most Unix-like operating systems. It consists of a set of actions to be taken against streams of textual data for the purpose of extracting or transforming text, e.g., producing formatted reports. The language uses the string datatype, associative arrays, and regular expressions.


AWK was created at Bell Labs in the 1970s and its name is an acronym derived from the surnames of its authors—Alfred Aho, Peter Weinberger, and Brian Kernighan.

Benefits:

  • AWK is used for searching and extracting data from a file.
  • It can also be used for manipulating data and generating reports.

WHAT CAN WE DO WITH AWK

AWK is like an independent programming language which consists of:

  • Variables
  • Operators
  • Conditional Statements
  • Loops
  • Arrays
  • Functions (Built-in and User-Defined)

HOW DOES THE AWK PROCESSING WORK


1) A file is treated as a sequence of records.
2) Each line is considered a record having multiple fields.
3) AWK searches for a pattern.
4) It performs the action mentioned in the command.





Please follow to get regular updates on this series


If you like please follow and comment

Understanding Linux Log Files



Log files are the records that Linux maintains for sysadmins, to monitor the significant and important events in the system. They contain messages about the kernel, services, and applications running on it.

The log files are found in the /var/log directory.

The log files created in a Linux environment can commonly be grouped into four classes:

1) Application Logs
2) Event Logs
3) Service Logs
4) System Logs

Role of Linux log files

Logging is a fundamental aspect of any sysadmin's duties.

By observing Linux log files, you can gain a detailed understanding of kernel execution, security, error messages, and warnings. If you want to take a proactive rather than a reactive approach to errors, regular log file analysis is 100% required for a sysadmin.

To put it plainly, log files let you anticipate upcoming issues before they actually happen.

Important Linux log files to keep an eye on

Monitoring and analyzing all of them can be a challenging task.

1) /var/log/messages

This log file contains generic system activity logs.
It is mainly used to store informational and non-critical system messages.
In Debian/Ubuntu-based systems, /var/log/syslog serves the same purpose.

It tracks non-kernel boot errors, application-related service errors, and the messages that are logged during system startup.
This is the first log file that Linux administrators should check if anything goes wrong.
For example, suppose you are facing some issues with the network card. To check whether something went wrong during the system startup process, you can have a look at the messages stored in this log file.
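For instance (a generic illustration; the interface name eth0 is just an example), you could scan for network-card messages like this:

grep -i eth0 /var/log/messages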

2)/var/log/auth.log

All authentication-related events in Debian and Ubuntu servers are logged here.
If you’re looking for anything involving the user authorization mechanism, you can find it in this log file.

Suspect that there might have been a security breach in your server? Notice a suspicious javascript file where it shouldn’t be? If so, then find this log file asap!

Investigate failed login attempts
Investigate brute-force attacks and other vulnerabilities related to the user authorization mechanism.

3)/var/log/secure


RedHat and CentOS-based systems use this log file instead of /var/log/auth.log. 

It is mainly used to track the usage of authorization systems.
It stores all security-related messages including authentication failures.
It also tracks sudo logins, SSH logins, and other errors logged by the system security services daemon.

All user authentication events are logged here.
This log file can provide detailed insight about unauthorized or failed login attempts
Can be very useful to detect possible hacking attempts.
It also stores information about successful logins and tracks the activities of valid users.

4)/var/log/boot.log


The system initialization script, /etc/init.d/bootmisc.sh, sends all bootup messages to this log file.
This is the repository of boot-related information and messages logged during the system startup process.

You should analyze this log file to investigate issues related to improper shutdown, unplanned reboots, or booting failures.
Can also be useful to determine the duration of system downtime caused by an unexpected shutdown.

5)/var/log/dmesg


This log file contains Kernel ring buffer messages.
Information related to hardware devices and their drivers is logged here.
As the kernel detects physical hardware devices associated with the server during the booting process, it captures the device status, hardware errors and other generic messages.
This log file is useful for dedicated server customers mostly.
If certain hardware is functioning improperly or not getting detected, then you can rely on this log file to troubleshoot the issue.

6)/var/log/kern.log

This is a very important log file as it contains information logged by the kernel.
Perfect for troubleshooting kernel-related errors and warnings.
Kernel logs can be helpful to troubleshoot a custom-built kernel.
Helps in debugging hardware and connectivity issues.

7)/var/log/faillog

This file contains information on failed login attempts.
It can be a useful log file to find out any attempted security breaches involving username/password hacking and brute-force attacks.

8)/var/log/cron

This log file records information on cron jobs.
Whenever a cron job runs, this log file records all relevant information including successful execution and error messages in case of failures.
If you’re having problems with your scheduled cron, you need to check out this log file.

9)/var/log/yum.log

It contains the information that is logged when a new package is installed using the yum command.

Track the installation of system components and software packages.
Check the messages logged here to see whether a package was correctly installed or not.
Helps you troubleshoot issues related to software installations.
Suppose your server is behaving unusually and you suspect a recently installed software package to be the root cause for this issue. In such cases, you can check this log file to find out the packages that were installed recently and identify the malfunctioning program. 

10)/var/log/maillog or /var/log/mail.log

All mail server related logs are stored here.
Find information about postfix, smtpd, MailScanner, SpamAssassin or any other email-related services running on the mail server.
Track all the emails that were sent or received during a particular period
Investigate failed mail delivery issues.
Get information about possible spamming attempts blocked by the mail server.
Trace the origin of an incoming email by scrutinizing this log file.

11)/var/log/httpd/

This directory contains the logs recorded by the Apache server.
Apache server logging information is stored in two different log files – error_log and access_log.

The error_log contains messages related to httpd errors such as memory issues and other system-related errors.
This is the place where Apache server writes events and error records encountered while processing httpd requests.
If something goes wrong with the Apache webserver, check this log for diagnostic information.
Besides the error-log file, Apache also maintains a separate list of access_log.
All-access requests received over HTTP are stored in the access_log file.
Helps you keep track of every page served and every file loaded by Apache.
Logs the IP address and user ID of all clients that make connection requests to the server.
Stores information about the status of the access requests – whether a response was sent successfully or the request resulted in a failure.

12)/var/log/mysqld.log or /var/log/mysql.log


As the name suggests, this is the MySQL log file, if it is installed.
All debug, failure and success messages related to the [mysqld] and [mysqld_safe] daemon are logged to this file.
RedHat, CentOS, and Fedora store MySQL logs in /var/log/mysqld.log, while Debian and Ubuntu maintain the log in /var/log/mysql.log.

Use this log to identify problems while starting, running, or stopping mysqld.
Get information about client connections to the MySQL data directory.
Get information about query locks and slow-running queries.


If you like please follow and comment

Shell Script with a Progress Bar


Have you ever wondered how to write a shell script that displays a progress bar? Those types of scripts look good and are more user-friendly.

In this post, I am going to share how to write a simple shell script with Progress Bar.

To implement this, what should I use?

Here is the answer: I am going to use only the "echo" command and a few of its options.

Command: echo

Options used

-n: do not append a new line
-e: enable interpretation of backslash escapes
\r: carriage return – moves the cursor back to the beginning of the line without starting a new line


Please note that we have to decide the percentage steps we want to display. Make sure the end of each line stays aligned. We can add as many steps in between as the task requires.

Sample Script

[himanshu@oel7 ~]$ cat progress_bar_test.sh
#!/bin/bash
##Progress Bar Sample Script##
echo "Work in progress"
echo -ne '=          [10%]\r'
sleep 2
echo -ne '===        [30%]\r'
sleep 2
echo -ne '=====      [50%]\r'
sleep 2
echo -ne '=======    [70%]\r'
sleep 2
echo -ne '==========[100%]\r'
echo -ne '\n'
sleep 2
echo "Sample script Completed"

Sample Output
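The output screenshot is not available. When run, each echo overwrites the previous bar on the same line, so the terminal ends up showing roughly:

Work in progress
==========[100%]
Sample script Completed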







If you like please follow and comment

How to Change EBS Homepage Branding



Sometimes we want to change the branding on the EBS Home Page; mostly this is done in a cloned environment.





Steps:

1) Navigate: Application ---> Function
2) Query the function FWK_HOMEPAGE_BRAND and enter the new value, e.g., E-Business Suite – TEST
3) Save
4) Log out and log back in to verify

If you like please follow and comment

Error : ORA-00600: internal error code, arguments: [1350], [1], [23], [], [], [], [], [], [], [], [], []




 Error: 

We can observe ORA-600 in the alert logs. The error can also be seen if we run the query below:
 
select T.nls_territory
from apps.fnd_territories_vl T, v$nls_valid_values V
where T.nls_territory = V.value
and V.parameter = 'TERRITORY';


ERROR at line 2:
ORA-00600: internal error code, arguments: [1350], [1], [23], [], [], [], [],
[], [], [], [], []

A SQL Developer connection will also show the same error.




Solution:

On the DB server, re-create the nls/data/9idata directory:
1. cd $ORACLE_HOME/nls/data/
2. mv 9idata 9idata_old
3. perl $ORACLE_HOME/nls/data/old/cr9idata.pl
4. Check that the new 9idata directory was created.
After creating the directory, make sure that the ORA_NLS10 environment variable is set to the full path of the 9idata directory whenever you enable the 11g Oracle home.
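For example (this path is the typical default; adjust for your environment):

export ORA_NLS10=$ORACLE_HOME/nls/data/9idata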
5. Restart the database and check the query again! It should return the data.





If you like please follow and comment