Simple Web Filtering with Squid

Squid is an open-source proxy server.  It can speed up Internet use by caching visited pages and serving them from the cache when users on the network request a previously visited page.  It can also do some simple web filtering: blacklisting (blocking sites) or whitelisting (allowing only certain sites) using user-defined lists.

If you want to take a different route with web filtering than plugging in good or bad domains yourself, there are third-party tools such as SquidGuard.  SquidGuard provides blacklists that are updated frequently.  The advantage of a tool like this is that you probably don’t know every website that should be blocked.  It organizes its blacklists into categories, e.g. gaming sites, auction sites, pornography, and a number of others.

Here, however, I will demonstrate how to set up some simple filtering with Squid without any additional tools.  It’s easy!

Done on: Ubuntu Linux

FIRST:

Download and install Squid.  I used apt-get, the Ubuntu (i.e. Debian) package manager.  The advantage of using a package manager is that it handles all dependencies for you, whereas downloading the source and compiling it yourself may not.

sudo apt-get install squid

Once that is done, take a look at the config file it installed by default in /etc/squid.

sudo vim /etc/squid/squid.conf

Next, find the part of the file that talks about access lists. You can get there quickly with one of the following:

/Access List
[or]
430gg

If you are not familiar with vim (my editor of choice), the slash “/” starts a search, and 430gg tells the editor to go to line 430 (which is about where the ACL section should start).  This section of the file explains ACL syntax; you can read it if you’d like.  Next, jump down a ways to line 656 or so, where the file starts talking about http_access.  The syntax is: http_access allow|deny aclname.

We first need an ACL file to work with, so quit out of the config file with “:q”.

Create an ACL file that will hold the sites to be blocked, and then add some domains to it.

sudo touch /etc/squid/bad.acl
sudo vim /etc/squid/bad.acl

Add domains to the file, one per line (the leading dot tells Squid to match the domain and all of its subdomains):
.ebay.com
.amazon.com

You now have a basic ACL to test. Go to the squid config file (/etc/squid/squid.conf) and add the following to the end of the acl section (should be about line 630):

acl bad dstdomain "/etc/squid/bad.acl"

This tells Squid there is an ACL, which we arbitrarily named “bad”, that matches any browsing destination domain listed in the /etc/squid/bad.acl file.  Now, the final step is to tell Squid to block these websites.  In the config file, travel down to the http_access section (about line 680) and find the line “# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS”.  After this, add:

http_access deny bad

This line should be followed by:

http_access allow localhost
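
Note that Squid evaluates http_access rules from top to bottom and uses the first match, so the deny line must come before the allow rules.  After both edits, the relevant pieces of squid.conf should look roughly like this (line numbers vary between Squid versions):

# in the acl section
acl bad dstdomain "/etc/squid/bad.acl"

# in the http_access section, before the allow rules
http_access deny bad
http_access allow localhost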

Save and quit out of the config file.  You’re ready to go!  Start up the service (or restart it if the package already started it for you):

sudo service squid start

Once it is running, open a browser and test it out.  Note that you have to point the browser at the proxy for this to work.  In Firefox, go to Edit > Preferences > Advanced > Network > Settings, choose Manual proxy configuration, and type in the proxy’s IP address (or localhost if testing locally) and port 3128, the default port that Squid listens on.  Also check “Use this proxy server for all protocols”.
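
You can also test from the command line before touching a browser; curl can be pointed at the proxy with its -x flag (assuming Squid is running locally on the default port):

curl -x http://localhost:3128 -I http://www.google.com/
curl -x http://localhost:3128 -I http://www.ebay.com/

The first request should come back normally, while the second should return a 403 along with Squid’s access-denied page.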

Now, browse to a normal website like google.com and it should perform normally. Next browse to a site that should be blocked (I used ebay.com in the example) and you should see something like this:

Example access denial

Posted in HTTP, Linux, Security

Update on Firesheep

Firesheep has certainly stirred the pot, and is leading to a push toward SSL among some prominent sites, probably to the joy of Firesheep’s developer.

Here are some articles I found very interesting:

PCWorld: Facebook and Twitter Flunk Security Report Card

Facebook responds to Firesheep

Microsoft promises to fix Hotmail

If Firesheep concerns you, Gunnar Atli Sigurdsson has developed FireShepherd, a program designed to thwart Firesheep by shutting it down on open networks.

Posted in Attacks, HTTP, Security

Firesheep and HTTP Hijacking

A Firefox add-on, “Firesheep”, has helped to illuminate the need for security on websites that use only HTTP to communicate.  Here’s what it’s dealing with.

When you log into a website, you send the web server your username and password.  This information is almost always encrypted, and it should be: otherwise anyone listening on the network would be able to see your password passing by in plaintext.  But then what happens?

Once you’re logged in, the web server will often send your computer a cookie, which usually carries an ID number (called a session ID or SID) that tells the server which running session is yours.  It is actually quite uncommon for websites to encrypt traffic at this point, which means your cookie and session information are sent in the clear.  But what’s the big deal?

The big deal is that this leaves your traffic vulnerable to HTTP hijacking, an attack that is fairly easy to perform.  Every time you perform an action on an HTTP website you are logged into (clicking a link, a button, or whatever), your computer sends a bunch of cookie information in the headers of its HTTP requests.  An attacker can simply listen to your network traffic, pull the session information out of the packets, send your session ID to the web server and, voilà, they are logged in to your account.  Many popular sites such as Facebook, Flickr, and Twitter send your session information in the clear, allowing it to be hijacked.  It’s just as easy as it sounds.  In fact, I tested this out (on myself, using a virtual interface and a VM, to avoid ethical issues).
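
For illustration, here is roughly what a request from a logged-in browser looks like on the wire over plain HTTP (the cookie names and values below are made up):

GET /home HTTP/1.1
Host: www.example.com
User-Agent: Mozilla/5.0
Cookie: session_id=8c2f91ab33d4e7f2; user=12345

Anyone who can sniff that Cookie header can replay it and impersonate the session.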

First, I installed Firesheep, tied it to my virtual interface, and told it to start capturing.  Next, I logged into my VM and started logging into websites.  Here are a couple of screenshots:

Logging into Facebook and other accounts in the virtual machine.


 

I used Wireshark, a simple packet capture utility, to watch my traffic.  Here is an example HTTP packet I captured from my VM.  The green circle outlines the cookie and session information included in the packet, which anyone on the network can observe.

Using Firesheep, a few of my sessions from the VM were captured.  I simply double-clicked the Facebook capture and, ta-da, I was logged into my account without ever entering my credentials.

Why is this all important?  For one, this is not a new type of attack; it is fairly simple and straightforward.  Firesheep is not especially complicated; it is merely designed to show how insecure the sites we commonly use are.  (In case you are wondering, Firesheep captured my Gmail login but was not able to do anything with it – I could not log in.)

Secondly, why don’t more sites use end-to-end encryption like SSL?  Banking websites frequently encrypt all of their traffic using SSL.  This means that even things like cookies are encrypted, so anyone listening on the network will see nothing more than a bunch of garbled text.  The argument against using SSL is that it is more costly and computationally expensive.  According to Google, that argument is no longer valid: they claim that switching over to SSL required no extra equipment, used less than 1% of CPU load, and added less than 2% of network overhead.

In reality, I think that many sites are simply apathetic at best toward security.  In light of recent news, it certainly seems that Facebook’s security policy is mostly lip service.

Spend the extra money and take the extra time to secure your web sites, and protect yourself against HTTP hijacking!

About Firesheep

Posted in Attacks, HTTP, Security

IT Infrastructure Monitoring with Nagios

Nagios is a widely used open-source IT infrastructure monitoring tool.  It can monitor physical devices on a network, such as servers, routers, firewalls, and even printers, as well as network services like SMTP, SSH, HTTP, SNMP, and really anything else you can think of.  Beyond that, it can also monitor the state of various resources on a device: processor usage, memory usage, free hard drive space, and a myriad of other things.

Nagios is built not just to monitor devices, but also to send out notifications when necessary.  These notifications can be configured to send out by email, SMS text, or really anything you could do from a command line.  That’s what’s great about it.  Imagine your phone buzzing at you at 2:00 a.m. to let you know that the Exchange server just went down — oh the joy!
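
For example, the stock sample configuration sends e-mail notifications with an ordinary shell pipeline.  A simplified sketch of such a command definition (assuming a working local mail command) might look like this:

define command{
        command_name    notify-host-by-email
        command_line    /usr/bin/printf "%b" "Host $HOSTNAME$ is $HOSTSTATE$\n" | /usr/bin/mail -s "Nagios alert: $HOSTNAME$ is $HOSTSTATE$" $CONTACTEMAIL$
        }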

Nagios Interface

Using Nagios Commands
There are many built-in commands that come with the default plugins installation (I used version 1.4.11).  If you followed the default quickstart installation (found on the Nagios site), the library of executables will be under /usr/local/nagios/libexec.  Looking in this folder will give you an idea of some of the things you can do with little effort.  Here’s the basic process to actually use one of these commands; in this example I’ll use the check_dns command.
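
Before wiring a plugin into Nagios, you can run it by hand from that directory to see what it expects and what it reports, for example:

cd /usr/local/nagios/libexec
./check_dns --help
./check_dns -H nagios.org -a 173.45.235.65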

1. Edit the appropriate object file.  Nagios recognizes devices through object files, typically found in /usr/local/nagios/etc/objects.  (I am assuming you already have object files that you are using to monitor devices.)  Add the following service definition to the object file of the host you want check_dns to run against:

define service{
        use                     generic-service
        host_name               hostname
        service_description     Check DNS
        check_command           check_dns
        }

2. Edit the commands.cfg file.  This file is also typically found in /usr/local/nagios/etc/objects.  Above, you added a check_command line to the service definition; it tells Nagios to use the check_dns command, but that command does not yet exist.  Yes, there’s a script in the libexec folder I referred to earlier called check_dns, but Nagios won’t use it unless we tell it how.  In the commands.cfg file, add the following code:

define command{
        command_name    check_dns
        command_line    $USER1$/check_dns -H nagios.org -a 173.45.235.65
        }

The above code defines the check_dns command that Nagios uses to run the check_dns script in the plugin directory ($USER1$ is a macro, set in resource.cfg, that expands to /usr/local/nagios/libexec) with the following flags: -H nagios.org (try a DNS resolution of nagios.org) and -a 173.45.235.65 (the IP address returned should be this address, which belongs to nagios.org).  If we left off the -a flag, the script would simply try the DNS resolution without checking that it returns the right address.

3. Edit the nagios.cfg file.  This file is typically in /usr/local/nagios/etc/.  Make sure it contains a cfg_file=/usr/local/nagios/etc/objects/yourfile.cfg line pointing at the object file you edited (if it doesn’t already).

4. Restart Nagios.
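
On a quickstart-style install, this means verifying the configuration and then restarting the daemon, something like:

sudo /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg
sudo /etc/init.d/nagios restart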

That’s it!  Now Nagios will be monitoring your DNS server to make sure it is resolving correctly.  Awesome!

Posted in Linux, Monitoring

Pass the Salt and Passwords, Please

I learned today about how passwords in Linux work.  (I did this in Ubuntu 10.04)

First of all, there are two important files to know about.  The first is /etc/passwd, which stores information about all users in the system and keeps the following information:

username:password:userID:groupID:description:home path:login shell
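
For example, an entry might look like this (the UID, GID, and paths here are just illustrative):

superman:x:1001:1001:Superman,,,:/home/superman:/bin/bash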

The password stored here will either be a hash of the actual password, or the letter ‘x’ (or some other indicator) signifying that a shadow file is in use.  The description field is also known as the ‘GECOS field’.
Note: placing a ‘*’ in the password field is one way to disable the account.

Next, the shadow file, which in Ubuntu is kept at /etc/shadow, stores the actual hashed password along with password-aging information.  The shadow file holds the following fields for each user (in the following format):

username:salted password hash:last password change:minimum age:maximum age:warning period:inactivity period:account expiration:reserved
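
An entry for the user created below might look something like this (the aging fields here are just illustrative values; the password field is explained next):

superman:$6$SMZDfb6Z$IozwirA3vsT.64dXArWkNeweUX7.Yqyjdoe7BnGXcr5f.7bKsqkVJwBbkPFCWWEy1.lLKLzU.ZW6Rz5TkLayS.:14963:0:99999:7:::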

As shown above, the password field here also includes a salt.  A salt in cryptography is a random group of characters added to a user’s password before hashing, producing a different hash and thus adding an extra layer of defense against attackers using rainbow tables.  (Even if two accounts use the same password, they will almost certainly be assigned different salts and therefore end up with different hashes, so an attacker cannot reuse a precomputed table; they have to attack each salt separately, which slows them down.)

Password field

The password field may look something like the following:

$6$SMZDfb6Z$IozwirA3vsT.64dXArWkNeweUX7.Yqyjdoe7BnGXcr5f.7bKsqkVJwBbkPFCWWEy1.lLKLzU.ZW6Rz5TkLayS.

Just looking at this string tells me a couple of things.  Here’s a breakdown:

$6$SMZDfb6Z$IozwirA3v…(cont’d): the leading $6$ tells me that the password was hashed using the SHA-512 algorithm.  (If it were $1$, it would be MD5.)

$6$SMZDfb6Z$IozwirA3v…(cont’d): the next section, SMZDfb6Z (between the second and third $), is my salt.

$6$SMZDfb6Z$IozwirA3v…(cont’d): the last section, everything after the third $, is the actual hash of the password combined with the salt.

How is this useful?

I created a user called ‘superman’ with the password ‘loislane’.  The above text is what showed up in my shadow file.

You can retrieve this string with the following command (I plugged in  ‘SMZDfb6Z’ and ‘loislane’ as the salt text and password):

mkpasswd --hash=SHA-512 -S salt_text password

Note: you may have to install mkpasswd to do this (on Ubuntu it is part of the whois package), and depending on the version the option may be spelled --method (or -m) rather than --hash.

This gives me the same string of text as what shows up in my Linux file.
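
Concretely, with the salt and password from this example, the command is:

mkpasswd --hash=SHA-512 -S SMZDfb6Z loislane

and its output matches the $6$SMZDfb6Z$… string shown above.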

Logging In
So here’s a summary of what actually happens when you log in to Linux:
1. You type in your username and password.
2. Linux grabs the salt for that user from the shadow file and hashes the typed password with it.
3. The resulting hash and the hash from the shadow file are compared.
4. If the two match, you are granted access!

This was a simple little thing, but fun to learn about.

Posted in Linux, Security

NIATEC Program and Scholarship for Service


NIATEC Program

Towards the end of August 2010 I began graduate business school at Idaho State University, as well as work at the NIATEC center under the Scholarship for Service program.  The NIATEC (National Information Assurance Training and Education Center) is directed by Dr. Corey Schou, a renowned expert in the field of Information Assurance.  The Scholarship for Service program is offered through the federal government to students receiving education in the field of Information Assurance.  Under the scholarship, students receive tuition coverage and other funding in exchange for federal service following graduation.

Having been here only a month, I have already been given great opportunities to learn about topics in the field of Information Assurance, and to apply that knowledge by earning CompTIA’s Security+ Certification.

I intend to use this blog as a way to share some things I have learned and find interesting.  I hope you will enjoy it.

-Kevin

Posted in Uncategorized

Kevin’s Tech Blog

Hello blog!  I intend to use this blog as a tool for me to record things that I have worked on, learned, or just find interesting within the world of technology.

Posted in Uncategorized