Introduction
One of the primary goals of today’s network administrators is to design, implement, and maintain secure networks. This is not always easy. No network can ever be labeled “secure.” Security is an ongoing process involving a myriad of protocols, procedures, and practices. This chapter focuses on some of the elements administrators use to keep their networks as secure as possible.
Firewalls
In today’s network
environments, firewalls protect systems from both external and internal
threats. Although firewalls initially became popular in corporate environments, many home networks
with a broadband Internet connection now also implement a firewall to protect against
Internet-borne threats.
Essentially, a firewall is an
application, device, system, or group of systems that controls the flow of
traffic between two networks. The most common use of a firewall is to protect a
private network from a public network such as the Internet. However,
firewalls are also increasingly being used to separate a sensitive area of a private network from less-sensitive areas.
At its most basic, a firewall is a device (a computer system running firewall software or a dedicated hardware device) that has more than one network interface. It manages the flow of network traffic between those interfaces. How it manages the flow and what it does with certain types of traffic depends on its configuration.
Strictly speaking, a firewall
performs no action on the packets it receives
besides the basic functions just described.
However, in a real-world implementation, a firewall is likely to offer other functionality, such as Network Address Translation (NAT) and proxy server services. Without NAT,
any host on the internal
network that needs to send or receive data through the firewall
needs a registered
IP address. Although such
environments exist, most people
have to settle for
using a private address range on the internal
network. Therefore, they rely on the firewall
system to translate the outgoing request into an acceptable public network address.
Although the fundamental purpose of a firewall is to protect one network
from another, you need to configure the firewall to allow some traffic through.
If you don’t need to allow
traffic to pass through a firewall, you can dispense
with it and completely separate your network
from others.
A firewall can employ a variety of methods to ensure security. In addition to the
role just described, modern firewall applications can perform a range of other
functions, often through the addition of add-on modules:
. Content filtering: Most firewalls can be configured to provide some level of content filtering. This can be done for both inbound and outbound content. For instance, the firewall can be configured to monitor inbound content, restricting certain locations or particular websites. Firewalls can also limit outbound traffic by prohibiting access to certain websites; the firewall maintains a list of URLs and IP addresses for this purpose. This is often done when organizations want to control employee access to Internet sites.
. Signature identification: A signature is a unique identifier for a particular application. In the antivirus world, a signature is an algorithm that uniquely identifies a specific virus. Firewalls can be configured to detect certain signatures that are associated with malware or other undesirable applications and block them before they enter the network.
. Virus scanning services: As web pages are downloaded, content within the pages can be checked for viruses. This feature is attractive to companies that are concerned about potential threats from Internet-based sources.
. Network Address Translation (NAT): To protect the identity of machines on the internal network, and to allow more flexibility in internal TCP/IP addressing structures, many firewalls translate the originating address of data into a different address. This address is then used on the Internet. NAT is a popular function because it works around the limited availability of TCP/IP addresses.
. URL filtering: By using a variety of methods, the firewall can choose to block certain websites from being accessed by clients within the organization. This blocking allows companies to control what pages can be viewed and by whom. A small sketch of this approach appears after this list.
. Bandwidth management: Although it’s required in only certain situations, bandwidth management can prevent a certain user or system from hogging the network connection. The most common approach to bandwidth management is to divide the available bandwidth into sections and then make just a certain section available to a user or system.
These functions are not strictly firewall activities. However, the flexibility offered by a firewall,
coupled with its placement at the edge of a network, makes a
firewall the ideal base for controlling access to external
resources.
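The URL filtering described in the list above boils down to a lookup against a maintained blocklist. The following is a small illustrative Python sketch; the domains, addresses, and function names are hypothetical, not any particular firewall's configuration:

```python
# Illustrative sketch: outbound requests are checked against known domains
# and IP addresses before they are allowed through. All values are hypothetical.
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"badsite.example", "gambling.example"}
BLOCKED_IPS = {"198.51.100.200"}

def allow_request(url: str, resolved_ip: str) -> bool:
    host = urlparse(url).hostname or ""
    if host in BLOCKED_DOMAINS or resolved_ip in BLOCKED_IPS:
        return False  # outbound access to this site is prohibited
    return True

print(allow_request("http://badsite.example/page", "203.0.113.7"))  # False
print(allow_request("http://example.org/", "93.184.216.34"))        # True
```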
Stateful and Stateless Firewalls
When talking about firewalls, two terms often come up—stateful and
stateless. These two terms
differentiate how firewalls operate. A stateless firewall, sometimes called a packet-filtering firewall, monitors specific data packets and restricts or allows access to the network based on certain criteria. Stateless firewalls look at each data packet in isolation and therefore are unaware if that particular data packet is part of a larger data stream. Essentially, stateless firewalls do not see the big picture or “state” of data flow, only the individual packets.
Today, stateful firewalls are more likely to be used. Stateful firewalls monitor data traffic streams from one end to the other. A stateful firewall refuses unsolicited incoming traffic that does not comply with dynamic or preconfigured firewall exception rules. A stateful firewall tracks the state of network connections, watching data traffic, including monitoring source and destination addresses and TCP and UDP port numbers.
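To make the distinction concrete, here is a minimal Python sketch, assuming toy packet fields and policy rather than any real firewall implementation. A stateful filter remembers outbound connections and admits only matching replies; a stateless filter would have to judge each packet on its own:

```python
# Toy stateful filter: track connections initiated from the inside so that
# only replies to outbound traffic are accepted. Fields are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Packet:
    src_ip: str
    src_port: int
    dst_ip: str
    dst_port: int

class StatefulFilter:
    def __init__(self):
        self.connections = set()  # flows seen leaving the internal network

    def outbound(self, pkt: Packet) -> bool:
        # Record the outbound flow so the reply can be matched later.
        self.connections.add((pkt.src_ip, pkt.src_port, pkt.dst_ip, pkt.dst_port))
        return True  # outbound traffic is allowed in this toy policy

    def inbound(self, pkt: Packet) -> bool:
        # Allow the packet only if it is a reply to a tracked connection.
        return (pkt.dst_ip, pkt.dst_port, pkt.src_ip, pkt.src_port) in self.connections

fw = StatefulFilter()
fw.outbound(Packet("192.168.1.10", 50000, "203.0.113.5", 80))
print(fw.inbound(Packet("203.0.113.5", 80, "192.168.1.10", 50000)))  # True: reply
print(fw.inbound(Packet("203.0.113.9", 80, "192.168.1.10", 50000)))  # False: unsolicited
```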
Packet-Filtering Firewalls
Packet filtering enables the firewall to examine each packet that passes through it and determine what to do with it based on the configuration. A packet-filtering firewall deals with packets at the data link layer (Layer 2) and network layer (Layer 3) of the Open Systems Interconnect (OSI) model. The following are some of the criteria by which packet filtering can be implemented (a simple evaluation sketch follows the list):
. IP address: By using the IP address
as a parameter, the firewall can allow or deny traffic based on the source or
destination IP address. For example, you can configure the firewall so that
only certain hosts on the internal network can access hosts on the Internet.
Alternatively, you can configure it so that only certain hosts on the Internet
can gain access to a system on the internal network.
. Port number: As discussed in Chapter 5, “TCP/IP Routing and Addressing,” the TCP/IP suite uses port numbers to identify
which service a certain packet is destined
for. By configuring the firewall to allow certain types of traffic,
you can control the flow. You might, for example, open port 80
on the firewall to allow Hypertext Transfer Protocol
(HTTP) requests from users on the Internet
to reach the corporate
web server. Depending
on the application, you might also open the HTTP Secure (HTTPS) port, port 443, to allow access to a secure web server application.
. Protocol ID: Because each packet
transmitted with IP has a protocol identifier,
a firewall can read this value and then determine what kind of packet it is. If you are filtering
based on protocol ID, you specify which protocols you will and will not allow to pass through
the firewall.
. MAC address: This is perhaps the
least used of the packet-filtering methods discussed, but it is possible to
configure a firewall to use the hardware-configured MAC address as the
determining factor in whether access to the network is granted. This is not a
particularly flexible method, and therefore it is suitable only in environments
in which you can closely control who uses which MAC address. The Internet is
not such an environment.
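As a rough illustration of how such criteria might be evaluated, here is a small Python sketch. The rule format, addresses, and default-deny policy are assumptions for illustration; real firewalls use vendor-specific rule syntax:

```python
# Evaluate a packet against an ordered rule list on source/destination IP,
# destination port, and protocol. The first matching rule decides; default deny.
def matches(rule, packet):
    # A rule field that is absent (None) means "any value".
    return all(
        rule.get(field) in (None, packet[field])
        for field in ("src_ip", "dst_ip", "dst_port", "protocol")
    )

def filter_packet(rules, packet):
    for rule in rules:
        if matches(rule, packet):
            return rule["action"]
    return "deny"

rules = [
    {"dst_port": 80, "protocol": "tcp", "action": "allow"},   # HTTP to web server
    {"dst_port": 443, "protocol": "tcp", "action": "allow"},  # HTTPS
    {"src_ip": "198.51.100.25", "action": "deny"},            # blocked host
]

print(filter_packet(rules, {"src_ip": "10.0.0.5", "dst_ip": "10.0.0.80",
                            "dst_port": 80, "protocol": "tcp"}))  # allow
print(filter_packet(rules, {"src_ip": "10.0.0.5", "dst_ip": "10.0.0.80",
                            "dst_port": 25, "protocol": "tcp"}))  # deny (no rule)
```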
Circuit-Level Firewalls
Circuit-level firewalls are similar
in operation to packet-filtering firewalls, but they operate at the transport
and session layers of the OSI model. The biggest difference between
a packet-filtering firewall
and a circuit-level firewall is that a circuit-level firewall validates TCP and UDP sessions before opening a connection, or circuit, through the firewall. When the session is established, the firewall maintains a table of valid connections and lets data pass through when session information matches an entry in the table. The table entry is removed, and the circuit is closed, when the session is terminated. Circuit-level firewalls, which operate at the session layer (Layer 5) of the OSI model, provided adequate protection in their day. As attacks have become more sophisticated and now include application layer attacks, circuit-level firewalls might not provide enough protection by themselves.
Application Layer Firewalls
As the name suggests, application layer firewalls operate at the application layer of the OSI model. In operation, application layer firewalls can inspect data packets traveling to or from an application. This allows the firewall to inspect, modify, block, and even redirect data traffic as it sees fit. Application layer firewalls are sometimes called proxy firewalls because they can proxy in each direction. This means that the source and destination systems do not come in direct contact with each other. Instead, the firewall proxy serves as a middle point.
Comparing Firewall Types
The following list provides a quick comparison of the types of firewalls previously discussed:
. Packet-filtering
firewalls operate at Layers 2 and 3 of the OSI model and are designed to
monitor traffic based on such criteria as source, port, or destination service
in individual IP packets. They’re usually very fast and transparent to users.
. Session
layer firewalls are also known as circuit-level firewalls. Typically these firewalls use NAT to protect the internal network. These
gateways have little or no connection to the application layer and therefore
cannot filter more complicated connections. These firewalls can protect traffic only on a basic rule base, such as source, destination, and port.
. Application layer firewalls control browser, Telnet, and FTP traffic, prevent unwanted traffic, and perform logging and auditing of traffic passing through them.
Firewall Wrap-up
Firewalls have become a necessity for organizations of all sizes. They are a common sight in businesses and homes alike. As the Internet becomes an increasingly hostile place, firewalls and the individuals who understand them are likely to become an essential part of the IT landscape.
Demilitarized Zones (Perimeter Network)
An important firewall-related concept is the demilitarized zone (DMZ), sometimes called a perimeter network. A DMZ is part of a network where you place servers that must be accessible by sources both outside and inside your network.
However, the DMZ is not connected
directly to either network, and it must always
be accessed through
the firewall. The military term DMZ is used because it describes an area that has little or no enforcement or policing.
Using DMZs gives your firewall configuration an extra level of flexibility, protection, and complexity. By using a DMZ, you can create an additional step that makes it more difficult for an intruder to gain access to the internal network. For example, an intruder who tried to come in through Interface 1 would have to spoof a request from either the web server or proxy server into Interface 2 before it could be forwarded to the internal network. Although it is not impossible for an intruder to gain access to the internal network through a DMZ, it is difficult.
Other Security Devices
A firewall is just one device we can use to help keep our networks
secure. It is not, however, the only
measure we can take. In many cases additional security strategies are required.
Three mentioned in the CompTIA Network+ objectives are IDS, IPS, and a VPN concentrator.
An intrusion prevention system (IPS) is a network device that continually scans the network, looking for inappropriate activity. It can shut down any potential threats. The IPS looks for any known signatures of common attacks and automatically tries to prevent those attacks. An IPS is considered an active (proactive) security measure because it continually monitors traffic and can take steps to correct a potential security threat.
An intrusion detection system (IDS) is a passive detection system. The IDS can detect the presence of an attack and then log that information. It also can alert an administrator to the potential threat. The administrator then analyzes the situation and takes corrective measures if needed.
Several
variations on IDSs exist:
. Network-Based Intrusion Detection System (NIDS): The NIDS examines all network traffic to and from network systems. If it is software, it is installed on servers or other systems that can monitor inbound traffic. If it is hardware, it may be connected to a hub or switch to monitor traffic.
. Host-Based Intrusion Detection System (HIDS): HIDS refers to host-based applications, such as spyware-detection or virus-detection programs, that are installed on individual network systems. The HIDS monitors and creates logs on the local system.
. Protocol-Based Intrusion Detection System (PIDS): The PIDS monitors and analyzes protocols communicating between network devices. A PIDS is often installed on a web server and analyzes HTTP and HTTPS communications.
. Application Protocol-Based Intrusion Detection System
(APIDS): The
APIDS monitors application-specific protocols.
In addition to IPS and IDS, you can use VPN concentrators to increase remote-access security. As mentioned in Chapter 1, “Introduction to Networking,” a VPN provides a way to transfer network data securely over a public network. The data transfer is private even though the network is public; hence the term “virtual private.” A VPN can be created using a hardware device known as a VPN concentrator. This device sits between the VPN client and the VPN server, creates the tunnel, authenticates users using the tunnel, and encrypts data traveling through the tunnel. When the VPN concentrator is in place, it can establish a secure connection (tunnel) between the sending and receiving network devices.
VPN concentrators add an additional level to VPN security. Depending on the exact concentrator, they can
. Create the tunnel.
. Authenticate users who want to use the tunnel.
. Encrypt and decrypt
data.
. Regulate and monitor
data transfer across
the tunnel.
. Control inbound and outbound traffic
as a tunnel endpoint or router.
The VPN concentrator invokes various standard protocols to accomplish
these functions. These protocols are discussed later
in this chapter.
Honeypots and Honeynets
When talking about network security,
honeypots and honeynets are often mentioned. Honeypots are a rather clever approach to network security, but perhaps a bit expensive. A honeypot is a
system set up as a decoy to attract and deflect attacks from hackers. The
server decoy appears to have everything a regular server does—OS, applications,
network services. The attacker thinks he is accessing a real network
server, but, in fact, he is in a network trap.
The honeypot has two key purposes. It can give administrators valuable information on the types of attacks being carried out. In turn, the honeypot can secure the real production servers according to what it learns. Also, the honeypot deflects attention from working servers, allowing them to function without being attacked.
A honeypot can
. Deflect the attention of attackers from production servers.
. Deter attackers if they suspect
their actions may
be monitored with
a honeypot.
.
Allow administrators to learn from the attacks to protect the real servers.
. Identify the source
of attacks, whether
from inside the
network or outside.
One step up from the honeypot is the honeynet. The honeynet is an entire network set up to monitor attacks from outsiders. All traffic into and out of the network is carefully tracked and documented. This information is shared with network professionals to help isolate the types of attacks launched against networks and to proactively manage those security risks. Honeynets function as a production network, using network services, applications, and more. Attackers don’t know that they are actually accessing a monitored network.
Access Control Overview
Access control describes the mechanisms used to filter network traffic to determine who is and who is not allowed to access the network and network resources. Firewalls, proxy servers, routers, and individual computers all can maintain access control to some degree. Because access control limits who can and cannot access the network and its resources, it is easy to understand why it plays an important role in any security strategy.
Several types of access control strategies exist, as discussed in the following sections.
Mandatory Access Control (MAC)
Mandatory access control (MAC) is the most secure form of access control. In systems configured to use mandatory access control, administrators dictate who can access and modify data, systems, and resources. MAC systems are commonly used in military installations, financial institutions, and, because of new privacy laws, medical institutions.
MAC secures information and resources by assigning sensitivity labels to objects and users. When a user requests access to an object, his sensitivity level is compared to the object’s. A label is a feature that is applied to files, directories, and other resources in the system. It is similar to a confidentiality stamp. When a label is placed on a file, it describes the level of security for that specific file. It permits access by files, users, programs, and so on that have a similar or higher security setting.
Discretionary Access Control (DAC)
Unlike mandatory access control, discretionary access control (DAC) is
not forced from the administrator or operating system.
Instead, access is controlled
by an object’s owner. For example, if a secretary creates a folder, she decides who
will have access to that folder. This
access is configured using permissions and an access control list.
DAC uses an access control list (ACL) to determine access. The ACL is a table that informs the operating system of the rights each user has to a particular system object, such as a file, directory, or printer. Each object has a security attribute that identifies its ACL. The list has an entry for each system user with access privileges. The most common privileges include the ability to read a file (or all the files in a directory), to write to the file or files, and to execute the file (if it is an executable file or program).
Microsoft Windows Servers/XP/Vista, Linux,
UNIX, and Mac OS X are among the operating systems that use
ACLs. The list is implemented differently by each operating system.
In Windows Server products, an ACL is associated with each system object.
Each ACL has one or more access control entries (ACEs) consisting of the name
of a user or group
of users. The user can also be a role name, such as “secretary” or “research.” For each of these users, groups,
or roles, the access privileges are stated in a string of bits called an access mask.
Generally, the system administrator or the object owner creates the ACL for an object.
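The ACL idea described above can be illustrated with a short Python sketch. The object, users, and privilege names are hypothetical; real operating systems store ACLs in their own formats:

```python
# Each object carries a list of entries granting privileges to specific users.
acl_for_report = {
    "alice": {"read", "write"},   # the owner granted herself read/write
    "bob":   {"read"},            # bob may only read the file
}

def has_access(acl: dict, user: str, privilege: str) -> bool:
    # Access is granted only if the user's ACL entry contains the privilege.
    return privilege in acl.get(user, set())

print(has_access(acl_for_report, "bob", "read"))    # True
print(has_access(acl_for_report, "bob", "write"))   # False
print(has_access(acl_for_report, "carol", "read"))  # False: no entry at all
```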
Rule-Based Access Control (RBAC)
Rule-based access control controls access to objects according to
established rules. The configuration and security settings
established on a router or firewall
are a good example.
When a firewall is configured, rules are set up that control access to the network. Requests are reviewed to see if the requestor meets the criteria to be allowed access through the firewall. For instance, if a firewall is configured to reject all addresses in the 192.166.x.x IP address range, and the requestor’s IP is in that range, the request would be denied.
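The 192.166.x.x example above can be expressed as a simple rule check. This sketch uses Python's standard ipaddress module; the policy itself is just the hypothetical one from the example:

```python
# Reject any requestor whose address falls in the blocked range.
import ipaddress

BLOCKED_RANGE = ipaddress.ip_network("192.166.0.0/16")

def allow(requestor_ip: str) -> bool:
    # The rule is applied uniformly; no user can override it.
    return ipaddress.ip_address(requestor_ip) not in BLOCKED_RANGE

print(allow("192.166.4.20"))  # False: inside the blocked range
print(allow("10.1.2.3"))      # True
```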
In a practical application, rule-based access control is a variation on
MAC. Administrators typically configure the firewall or other device to allow
or deny access. The owner or another
user does not specify the conditions of acceptance,
and safeguards ensure
that an average
user cannot change
settings on the devices.
Role-Based Access Control (RoBAC)
In role-based access control (RoBAC), access decisions are determined by the roles that individual users have within the organization. Role-based access requires the administrator to have a thorough understanding of how a particular organization operates, the number of users, and each user’s exact function in that organization.
Because access rights are grouped by role name, the use of resources is restricted to individuals who are authorized to assume the associated role. For example, within a school system, the role of teacher can include access to certain data, including test banks, research material, and memos. School administrators might have access to employee records, financial data, planning projects, and more.
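Here is a minimal Python sketch of the role-to-permission grouping just described, using the school example. The role names, users, and resources are hypothetical:

```python
# Rights are grouped by role; a user's access is whatever his or her roles permit.
ROLE_PERMISSIONS = {
    "teacher": {"test_banks", "research_material", "memos"},
    "administrator": {"employee_records", "financial_data", "planning_projects", "memos"},
}

USER_ROLES = {
    "jsmith": {"teacher"},
    "mjones": {"administrator"},
}

def can_access(user: str, resource: str) -> bool:
    # Union of the permissions for every role the user holds.
    return any(resource in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(can_access("jsmith", "test_banks"))        # True
print(can_access("jsmith", "employee_records"))  # False
```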
The use of roles to control access can be an effective means of
developing and enforcing enterprise-specific security policies and for
streamlining the security management process.
Roles should receive just the privilege level necessary to do the job associated with that role. This general security principle is known as the least privilege concept. When someone is hired in an organization, his or her role is clearly defined. A network administrator creates a user account for the new employee and places that user account in a group with people who have the same role in the organization.
Least privilege is often too restrictive to be practical in business. For instance, using teachers as an example, some more experienced teachers might have more responsibility than others and might require increased access to a particular network object. Customizing access to each individual is a time-consuming process.
MAC Filtering
Filtering network traffic
using a system’s MAC address typically is done using
an ACL. This list keeps track of all MAC addresses and is configured to
allow or deny access to certain systems based on the list. As an example, let’s look at the MAC ACL from a router.
Figure 9.3 shows
the MAC ACL screen.
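Conceptually, a MAC ACL such as the one shown in Figure 9.3 amounts to a membership check against a list of hardware addresses. A tiny Python sketch, with hypothetical addresses and an allow-list policy:

```python
# The ACL is simply a set of MAC addresses; access is allowed by membership.
ALLOWED_MACS = {
    "00:1a:2b:3c:4d:5e",
    "00:1a:2b:3c:4d:5f",
}

def permit(mac_address: str) -> bool:
    return mac_address.lower() in ALLOWED_MACS

print(permit("00:1A:2B:3C:4D:5E"))  # True
print(permit("66:77:88:99:aa:bb"))  # False
```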
TCP/IP Filtering
Another type of filtering that can be used with an ACL is TCP/IP filtering. The ACL determines what types of IP traffic will be let through the router. IP traffic that is not permitted according to the ACL is blocked. Depending on the type of IP filtering used, the ACL can be configured to allow or deny several types of IP traffic:
. Protocol type: TCP, UDP, ICMP, SNMP, IP
. Port number used by protocols (for TCP/UDP)
. Message source address
.
Message destination address
Tunneling and Encryption
In the mid-1990s, Microsoft, IBM, and Cisco began working on a technology
called tunneling. By 1996, more companies
had become interested and involved in the work, and the project
soon produced two new virtual
private networking solutions:
Point-to-Point Tunneling Protocol
(PPTP) and Layer 2 Tunneling Protocol
(L2TP). Ascend, 3Com, Microsoft, and U.S. Robotics had developed PPTP, and Cisco
Systems had introduced the Layer 2 Forwarding (L2F)
protocol.
From these developments, virtual private networks (VPNs) became one of
the most popular methods of remote access. Essentially,
a VPN extends a local area network (LAN) by establishing a remote connection, a connection tunnel,
using a public network such as the Internet. A VPN provides a secure
point-to-point dedicated link between
two points over a public
IP network.
A VPN encapsulates encrypted data inside another datagram that contains routing information. The connection between the two computers acts like a switched connection dedicated to those two computers. The encrypted data is encapsulated inside the Point-to-Point Protocol (PPP), and that connection is used to deliver the data.
A VPN allows anyone with an Internet connection to use the infrastructure of the public network to dial in to the main network and access resources as if she were logged on to the network locally. It also allows two networks to be connected to each other securely.
Many elements are involved in
establishing a VPN connection:
. A VPN client: The VPN client is the computer that initiates the connection to the VPN server.
. A VPN server: The VPN server authenticates connections from VPN clients.
. An access method: As mentioned, a VPN is most often established over a public network such as the Internet; however, some VPN implementations use a private intranet. The network used must be IP-based.
. VPN protocols: Protocols are required to establish, manage,
and secure the data over the
VPN connection. PPTP and L2TP are commonly associated with VPN connections.
VPNs have become popular because they allow the public Internet to be safely utilized as a wide area network (WAN) connectivity solution. (A complete discussion of VPNs would easily fill another book and goes beyond the scope of the Network+ objectives.)
Point-to-Point Tunneling Protocol (PPTP)
PPTP, which is documented in RFC 2637, is often mentioned together with PPP. Although it’s used in dialup connections, as PPP is, PPTP provides different functionality. It creates a secure tunnel between two points on a network, over which other connectivity protocols, such as PPP, can be used. This tunneling functionality is the basis of VPNs.
To establish a PPTP session
between a client and server, a TCP
connection known as a PPTP control connection is required to create and maintain the communication tunnel. The PPTP control connection exists between the IP address of the PPTP client and the IP address of the PPTP server, using TCP port 1723 on the server and a dynamic port on the client. It is the function of the PPTP control connection to pass the PPTP control and management messages used to maintain the PPTP communication tunnel between the remote system and the server. PPTP provides authenticated and encrypted communications between two endpoints such as a client and server. PPTP does not use a public key infrastructure but does use a user ID and password.
PPTP uses the same authentication
methods as PPP, including MS-CHAP, CHAP, PAP, and EAP, which are discussed later
in this chapter.
Layer 2 Tunneling Protocol (L2TP)
L2TP is a combination of PPTP and Cisco’s L2F
technology. L2TP,
as
the name suggests, utilizes
tunneling to deliver data. It authenticates the client in a two-phase process:
It authenticates the computer and then the user. By authenticating the computer, it prevents the data from being intercepted, changed, and returned to the user in what is known as a man-in-the-middle attack. L2TP assures both parties that the data they are receiving is exactly the data sent by the originator.
L2TP and PPTP are both tunneling protocols, so you might be wondering which you should use. Here is a
quick list of some of the advantages of each, starting with PPTP:
. PPTP has been around
longer; it offers
more interoperability than L2TP.
. PPTP is an industry
standard.
. PPTP is easier to configure than L2TP because L2TP uses digital certificates.
. PPTP has less overhead
than L2TP.
The following
are some of the advantages of L2TP:
. L2TP offers greater security than PPTP.
. L2TP supports common
public key infrastructure technology.
. L2TP provides support
for header compression.
IPSec
The IP Security (IPSec) protocol is designed to provide secure
communications between systems. This includes system-to-system communication in
the same network, as well as communication to systems on external networks. IPSec is an IP
layer security protocol that can both encrypt and authenticate network transmissions. In a nutshell, IPSec is composed of two separate protocols—Authentication Header (AH) and Encapsulating Security Payload (ESP). AH provides the authentication and integrity checking for data packets, and ESP provides encryption services.
Using both AH and ESP, data traveling between systems can be secured, ensuring that transmissions cannot be viewed, accessed, or modified by those who should not have access to them. It might seem that protection on an internal network is less necessary than on an external network; however, much of the data you send across networks has little or no protection, allowing unwanted eyes to see it.
IPSec provides three key security services:
. Data verification: It verifies that the data
received is from
the intended source.
. Protection from data tampering: It ensures
that the data has not been
tampered with or changed between
the sending and receiving devices.
. Private transactions: It ensures
that the data sent between
the sending and receiving devices is unreadable by any other
devices.
IPSec operates at the network layer of the Open Systems Interconnect (OSI) model and provides security for protocols that operate at the higher layers. Thus, by using IPSec, you can secure practically all TCP/IP-related communications.
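AH's integrity and authentication service can be illustrated, in a very simplified way, with a keyed hash. The sketch below uses Python's standard hmac module; it is not the actual AH packet format, only the underlying idea that a shared key lets the receiver detect tampering:

```python
# Sender and receiver share a key and attach a keyed hash (HMAC) to the payload
# so any change in transit can be detected. The key shown is hypothetical.
import hmac
import hashlib

shared_key = b"example-shared-secret"

def protect(payload: bytes):
    tag = hmac.new(shared_key, payload, hashlib.sha256).digest()
    return payload, tag

def verify(payload: bytes, tag: bytes) -> bool:
    expected = hmac.new(shared_key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

data, tag = protect(b"transfer $100 to account 42")
print(verify(data, tag))                              # True: intact
print(verify(b"transfer $900 to account 42", tag))    # False: tampered
```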
Remote-Access Protocols and Services
Today, there are many ways to establish remote access into networks. Some of these include such things as VPNs or Plain Old Telephone Service (POTS) dialup access. Regardless of the technique used for remote access or the speed at which access is achieved, certain technologies need to be in place for the magic to happen. These technologies include the protocols to allow access to the server and to secure the data transfer after the connection is established. Also necessary are methods of access control that make sure only authorized users are using the remote-access features.
All the major operating systems
include built-in support
for remote access.
They provide both the access methods
and security protocols necessary to secure
the connection and data
transfers.
Remote Access Service (RAS)
RAS is a remote-access solution included with Windows Server products. RAS is a feature-rich, easy-to-configure, and easy-to-use method of providing remote access. Any system that supports the appropriate dial-in protocols, such as PPP, can connect to a RAS server. Most commonly, the clients are Windows systems that use the dialup networking feature, but any operating system that supports dialup client software will work. Connection to a RAS server can be made over a standard phone line, using a modem, over a network, or via an ISDN connection.
RAS supports remote connectivity from all the major client operating
systems available today, including
all newer Windows OSs:
. Windows Server products
. Windows XP/Vista Home-based clients
. Windows XP/Vista Professional-based clients
. UNIX-based/Linux
clients
. Macintosh-based clients
Although the system is called RAS, the underlying technologies that
enable the RAS process are dialup protocols such as Serial Line Internet
Protocol (SLIP) and PPP.
SLIP
SLIP was designed to allow data to be transmitted via Transmission Control Protocol/Internet Protocol (TCP/IP) over serial connections in a UNIX environment. SLIP did an excellent job, but time proved to be its enemy.
SLIP was developed in an atmosphere in which security was not an
overriding concern; consequently, SLIP
does not support encryption or authentication. It transmits all the data used to establish a connection (username and password) in clear text, which is, of course,
dangerous in today’s insecure
world.
In addition to its inadequate security,
SLIP does not provide error checking or packet addressing, so it can be
used only in serial communications. It supports only TCP/IP, and you log in using a terminal window.
Many operating systems still provide at least minimal SLIP support for backward compatibility with older environments, but SLIP has been replaced by a newer and more secure alternative: PPP. SLIP is still used by some government agencies and large corporations in UNIX remote-access applications, so you might come across it from time to time.
PPP
PPP is the standard remote-access protocol in use today. PPP is actually a family of protocols that work together to provide connection services.
Because PPP is an industry
standard, it offers interoperability between
different software vendors in various remote-access implementations. PPP
provides a number of security enhancements compared to regular SLIP, the most important being the encryption of usernames and passwords during the
authentication process. PPP allows remote clients and servers to
negotiate data encryption methods and authentication methods and support new
technologies. PPP even lets administrators choose which LAN protocol to use
over a remote link. For example, administrators can choose from among NetBIOS
Extended User Interface (NetBEUI), NWLink Internetwork Packet
Exchange/Sequenced Packet Exchange (IPX/SPX),
AppleTalk, or
TCP/IP.
During the establishment of a PPP connection between the remote system
and the server, the remote server
needs to authenticate the remote user. It
does so by using the PPP authentication protocols. PPP accommodates a number of
authentication protocols, and it’s possible
on many systems to configure more than one authentication protocol. The
protocol used in the authentication process depends on the security
configurations established between
the remote user and the server. PPP authentication protocols
include CHAP, MS-CHAP, MS-CHAP v2, EAP, and PAP.
Each of these authentication methods is discussed in the section “Remote
Authentication Protocols.”
PPPoE
Point-to-Point Protocol over Ethernet (PPPoE) is a protocol used to
connect multiple network users on an Ethernet local area network to a remote
site through a common device. For example, using PPPoE, it is possible to have
all users on a network share the same link, such as a DSL, cable modem, or
wire- less connection to the Internet. PPPoE is a combination of PPP and
the Ethernet protocol, which supports
multiple users in a local area network
(hence the name). The PPP information is encapsulated within an Ethernet frame.
With PPPoE, a number of different users can share the same physical connection to the Internet. In the process, PPPoE provides a way to keep track of individual user Internet access times. Because PPPoE allows for individual authenticated access to high-speed data networks, it is an efficient way to create a separate connection to a remote server for each user. This strategy allows Internet service providers (ISPs) or administrators to bill or track access on a per-user basis rather than a per-site basis.
Users accessing PPPoE connections require the same information as required with standard dialup phone accounts, including a username and password combination. As with a dialup PPP service, an ISP will most likely automatically assign configuration information such as the IP address, subnet mask, default gateway, and DNS server.
The PPPoE communication process has two stages—the discovery stage and the PPP session stage. The discovery stage uses four steps to establish the PPPoE connection: initiation, offer, request, and session confirmation. These steps represent back-and-forth communication between the client and the PPPoE server. After these steps have been negotiated, the PPP session can be established using familiar PPP authentication protocols.
Remote-Control Protocols
CompTIA lists three protocols that are associated with remote-control
access: Remote Desktop Protocol (RDP), Virtual
Network Computing (VNC), and Citrix Independent Computing Architecture
(ICA). RDP is used in a Windows environment.
Terminal Services provides a way for a client system to connect to a server, such as Windows Server 2000/2003/2008, and, by using RDP, run applications on the server as if they were local client applications. Such a configuration is known as thin client computing, whereby client systems use the resources of the server instead of their local processing power.
Windows Server products and Windows XP and Vista have built-in support for remote desktop connections. The underlying protocol used to manage the connection is RDP. RDP is a low-bandwidth protocol used to send mouse movements, keystrokes, and bitmap images of the screen on the server to the client computer. RDP does not actually send data over the connection—only screenshots and client keystrokes.
VNC consists of a client, a server, and
a communication protocol. It is a system whereby a remote user can access the
screen of another computer system. As with the other systems mentioned here,
VNC allows remote login, in which clients can access their desktop while being physically away from their computer. VNC uses the remote frame buffer (RFB) protocol. RFB is the backbone allowing remote access to another system’s graphical interface.
Finally, Citrix
ICA allows clients
to access and run applications on a server, using the server’s resources. Only the user interface, keystrokes, and
mouse movements are transferred between the client system and the server. In effect, even though you are working at the remote computer, the system functions as if you were actually sitting at the computer itself. As with Terminal Services and RDP, ICA is an example of thin client computing.
Authentication, Authorization, and Accounting (AAA)
It is important to understand the difference between authentication, authorization, and accounting. Although these terms are sometimes used interchangeably, they refer to distinct steps that must be negotiated successfully to determine whether a particular request for a resource will result in that resource’s actually being returned.
Authentication refers to the mechanisms used to verify the identity of the computer or user attempting to access a particular resource. Authentication is usually done with a set of credentials—most commonly a username and password. More sophisticated identification methods can include the use of
. Smart cards
. Biometrics
. Voice recognition
. Fingerprints
Authentication is a significant consideration for network and system security.
Authorization determines if the person, previously identified and authenticated, is allowed access to a particular resource. This is commonly determined through group association. In other words, a particular group may have a specific level of security clearance.
A bank transaction at an ATM is
another good example of authentication and authorization. When a bank card is
placed in the ATM, the magnetic strip
is read, making it apparent that someone is trying to access a particular account.
If the process ended there and access were granted, it would be a significant security problem, because anyone holding the card could gain immediate access. To authenticate the client, after the card is placed in the ATM, a secret code or personal identification number (PIN) is required. This authentication ensures that it is the owner of the card who is trying to gain access to the bank account.
With the correct code, the client is verified and authenticated, and
access is granted. Authorization addresses the specifics of which accounts
or features the user
is allowed to access after being authenticated, such as a checking or savings
account.
Accounting refers to the tracking
mechanisms used to keep a record of events on a system. One tool often used for this purpose
is auditing. Auditing is the process of monitoring occurrences and
keeping a log of what has occurred on a system. A system administrator
determines which events should be audited. Tracking
events and attempts
to access the system helps
prevent unauthorized access
and provides a record that administrators can analyze to make security
changes as necessary. It also provides administrators with solid evidence
if they need to look into improper user conduct.
The first step in auditing is to identify what system events to monitor. After the system events are identified, in a Windows
environment, the administrator can choose to monitor the success or failure of a system event.
For instance, if “logon” is the event being audited, the administrator might
choose to log all unsuccessful logon attempts, which might indicate that
someone is attempting to gain unauthorized access. Conversely, the administrator can choose to audit all successful
attempts to monitor when a particular user or user group is log- ging on. Some
administrators prefer to log both events. However,
overly ambi- tious audit policies
can reduce overall
system performance.
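As a rough illustration of the auditing idea, here is a minimal Python sketch that records logon success and failure to a log file. It assumes a simple application-level log rather than the Windows event system described above; the event names and file name are hypothetical:

```python
# Record each logon attempt so failures can later be reviewed for signs of
# unauthorized access attempts.
import logging

logging.basicConfig(filename="security_audit.log", level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

def record_logon(user: str, success: bool) -> None:
    if success:
        logging.info("LOGON SUCCESS user=%s", user)
    else:
        # Repeated failures for one account may indicate a break-in attempt.
        logging.warning("LOGON FAILURE user=%s", user)

record_logon("jsmith", True)
record_logon("jsmith", False)
```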
Passwords and Password Policies
Although biometrics and smart cards are becoming more common, they still
have a very long way to go before they attain the level of popularity that
username and password combinations enjoy. Apart from the fact that usernames and passwords do not require any additional equipment, which practically every other method of authentication does, the username and password process is familiar to users, easy to implement, and relatively secure. For that reason, they are worthy of more detailed coverage than the other authentication systems already discussed.
Passwords are a relatively simple form of authentication in that only a string of
characters can be used to authenticate the user. However, how the string of characters is used and which policies you can put in place to govern them make usernames and passwords an excellent form of authentication.
Password Policies
All popular network operating systems include password policy systems that allow the network administrator to control how passwords are used on the system. The exact capabilities vary between network operating systems. However, they generally allow the following (a small policy-checking sketch appears after this list):
. Minimum length of password: Shorter passwords are easier to guess than longer ones. Setting a minimum password length does not prevent a user from creating a longer password than the minimum, although each network operating system has a limit on how long a password can be.
. Password expiration: Also known as the maximum password age, password expiration defines how long the user can use the same password before having to change it. A general practice is that a password be changed every 30 days. In high-security environments, you might want to make this value shorter, but you should generally not make it any longer. Having passwords expire periodically is an important feature, because it means that if a password is compromised, the unauthorized user will not have access indefinitely.
. Prevention of password reuse: Although a system might be able to cause a password to expire and prompt the user to change it, many users are tempted to simply use the same password again. A process by which the system remembers the last, say, 10 passwords is most secure, because it forces the user to create completely new passwords. This feature is sometimes called enforcing password history.
. Prevention of easy-to-guess passwords:
Some systems can evaluate the password provided by a user to determine whether
it meets a required level of complexity. This prevents users from having passwords
such as password, 12345678,
their name, or their nickname.
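The following Python sketch combines several of the controls above into one check, assuming hypothetical policy values (an eight-character minimum, a basic complexity rule, and a ten-password history):

```python
# Return a list of policy problems; an empty list means the password is acceptable.
import re

MIN_LENGTH = 8
HISTORY_DEPTH = 10

def check_password(candidate: str, previous_passwords: list) -> list:
    problems = []
    if len(candidate) < MIN_LENGTH:
        problems.append("shorter than the minimum length")
    if not (re.search(r"[a-z]", candidate) and re.search(r"[A-Z]", candidate)
            and re.search(r"\d", candidate)):
        problems.append("must mix uppercase, lowercase, and digits")
    if candidate in previous_passwords[-HISTORY_DEPTH:]:
        problems.append("reused from password history")
    return problems

print(check_password("password", ["Winter2024x"]))   # too simple
print(check_password("t4BleT0pX", ["Winter2024x"]))  # [] (passes this policy)
```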
Password Strength
No matter how good a company’s password
policy, it is only as effective as
the passwords that are created within it. A password that is hard to guess, or strong, is more likely to
protect the data on a system than one that is easy to guess, or weak.
To understand the difference between a strong password and a weak one, consider this: A password of six characters that uses only numbers and letters and that is not case-sensitive has roughly 2.2 billion (36 to the sixth power) possible combinations. That might seem like a lot, but to a password-cracking program, it’s really not much security. A password that uses eight case-sensitive characters, with letters, numbers, and special characters, has more than six quadrillion possible combinations.
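These counts are easy to verify, assuming 36 symbols for case-insensitive letters and digits and roughly 95 printable characters once mixed case and special characters are allowed:

```python
# Number of possible passwords = (size of character set) ** (password length)
case_insensitive_6 = 36 ** 6   # letters + digits, not case-sensitive, 6 characters
full_charset_8 = 95 ** 8       # printable ASCII, 8 characters

print(f"{case_insensitive_6:,}")  # 2,176,782,336  (about 2.2 billion)
print(f"{full_charset_8:,}")      # 6,634,204,312,890,625  (over 6 quadrillion)
```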
There has always been debate over how long a password should be. It should be sufficiently long that it is hard to
break but sufficiently short that the user can easily remember it (and type
it). In a normal working environment, passwords of eight characters are
sufficient. Certainly, they should
be no fewer than six characters. In environments where security is a concern, passwords should be 10
characters or more.
Users should be encouraged to use a password that is considered strong. A strong password has at least eight characters; has a combination of letters, numbers, and special characters; uses mixed case; and does not form a proper word. Examples are 3Ecc5T0h and e1oXPn3r. Such passwords might be secure, but users are likely to have problems remembering them. For that reason, a popular strategy is to use a combination of letters and numbers to form phrases or long words. Examples include d1eTc0La and tAb1eT0p. These passwords might not be quite as secure as the preceding examples, but they are still very strong and a whole lot better than the name of the user’s pet.
Kerberos Authentication
Kerberos is an Internet Engineering Task
Force (IETF) standard for providing authentication. It is an integral
part of network security. Networks,
including the Internet, can connect people from all over the world. When data
travels from one point to another across a network, it can be lost, stolen,
corrupted, or misused. Much of the data sent over networks is sensitive, whether
it is medical, financial, or otherwise. A key consideration for those
responsible for the net- work is maintaining the confidentiality of the data.
In the networking world, Kerberos plays a significant role in data confidentiality.
In a traditional authentication strategy,
a username and password are used to access network resources. In a secure
environment, it might be necessary to provide a username and password combination to access each network service or resource. For example, a user might be prompted to type in her username and password when accessing a database, and again for the printer and again for Internet access. This is a very time-consuming process, and it can also present a security risk. Each time the password is entered, there is a chance that someone will see it being entered. If the password is sent over the network without encryption, it might be viewed by malicious eavesdroppers.
Kerberos was designed to fix such problems by using a method requiring only a single sign-on. This single sign-on allows a user to log into a system and access multiple systems or resources without the need to re-enter the username and password repeatedly. Additionally, Kerberos is designed to have entities authenticate themselves by demonstrating possession of secret information.
Kerberos is one part of a strategic security solution that provides secure authentication services to users, applications, and network devices by eliminating the insecurities caused by passwords being stored or transmitted across the network. Kerberos is used primarily to eliminate the possibility of a network “eavesdropper” tapping into data over the network—particularly usernames and passwords. Kerberos ensures data integrity and blocks tampering on the network. It employs message privacy (encryption) to ensure that messages are not visible to eavesdroppers on the network.
For
the network user, Kerberos eliminates the need to repeatedly demonstrate
possession of private or secret
information.
Kerberos is designed to provide strong authentication for client/server applications by using secret-key cryptography. Cryptography is used to ensure that a client can prove its identity to a server (and vice versa) across an insecure network connection. After a client and server have used Kerberos to prove their identity, they can also encrypt all their communications to ensure privacy and data integrity.
The key to understanding Kerberos is to understand the secret key cryptography it uses. Kerberos uses symmetric key cryptography, in which both client and server use the same encryption key to cipher and decipher data.
In secret key cryptography, a plain-text message can be converted into ciphertext (encrypted data) and then converted back into plain text using one key. Thus, two devices share a secret key to encrypt and decrypt their communications.
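As a concrete illustration of a single shared key doing both jobs, the sketch below uses the third-party Python cryptography package. This is an assumption for illustration only; Kerberos itself defines its own key distribution and message formats:

```python
# Symmetric (secret key) encryption: one shared key both encrypts and decrypts.
from cryptography.fernet import Fernet

shared_key = Fernet.generate_key()   # both parties must hold this same key
cipher = Fernet(shared_key)

token = cipher.encrypt(b"meet at the usual place")   # ciphertext
print(cipher.decrypt(token))                          # b'meet at the usual place'
```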
Kerberos authentication works by assigning a unique key (called a ticket) to each client that successfully authenticates to a server. The ticket is encrypted and contains credentials that are used to verify the user’s identity when a particular network service is requested. Each ticket is time-stamped. It expires after a period of time, and a new one is issued. Kerberos works much like going to a movie. First, you go to the ticket counter, tell the person what movie you want to see, and get your ticket. After that, you go to a turnstile and hand the ticket to someone else, and then you’re “in.” In simplistic terms, that’s Kerberos.
Public Key Infrastructure
A Public Key Infrastructure (PKI) is a collection of software, standards, and policies that are combined to allow users from the Internet or other unsecured public networks to securely exchange data. PKI uses a public and private cryptographic key pair that is obtained and shared through a trusted authority. Services and components work together to develop the PKI. Some of the key components of a PKI include the following:
. Certificates: A form of electronic credentials that validates users, computers, or devices on the network. A certificate is a digitally signed statement that associates the credentials of a public key to the identity of the person, device, or service that holds the corresponding private key.
. Certificate authorities (CAs): CAs issue and manage certificates. They validate the identity of a network device or user requesting data. CAs can be either independent third parties, known as public CAs, or organizations running their own certificate-issuing server software, known as private CAs.
. Certificate templates: Templates used to customize certificates issued by a Certificate Server. This customization includes
a set of rules and settings created on the CA and
used for incoming certificate requests.
. Certificate Revocation List (CRL): A list of certificates that were revoked before they reached the certificate expiration date. Certificates are often revoked due to security concerns such as a compromised certificate.
Public Keys and Private Keys
A cornerstone concept of the PKI infrastructure is public and private
keys. Recall from Figure 9.5 that symmetric key cryptography is a system in
which both client and server use the same encryption key to cipher
and decipher data. The term key is used for very good
reason—public and private keys are used to
lock (encrypt) and unlock (decrypt) data. These keys are actually
long numbers, making it next
to impossible for someone to access a particular key. When keys are used to secure data transmissions, the
computer generates two different types of keys:
. Public key: A nonsecret
key that forms half of a cryptographic key pair that is used with a public
key algorithm. The public key is freely
given to all potential receivers.
. Private key: The secret half of a cryptographic key pair that is used with
a public key algorithm. The private part of the public key cryptography
system is never
transmitted over a network.
Keys can be used in two different ways to secure
data communications:
. Public (asymmetric) key encryption uses both a private and public key to encrypt and decrypt messages. The public key is used to encrypt a message or verify a signature, and the private key is used to decrypt the message or to sign a document. (A short sketch of this appears after this list.)
FIGURE 9.6 Public
(asymmetric) key encryption.
. Private
(symmetric) key encryption uses a single key for both encryption and
decryption. If a person possesses the key, he
or she can both encrypt and decrypt messages.
Unlike public keys, this single secret key cannot be shared with anyone except people who should be permitted to decrypt as well
as encrypt messages.
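As a short illustration of the asymmetric case, the sketch below uses the third-party Python cryptography package; the key size and padding choices are illustrative assumptions, not a recommendation from the text:

```python
# Public (asymmetric) key encryption: the public key encrypts, and only the
# holder of the matching private key can decrypt.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()   # freely given to all potential senders

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

ciphertext = public_key.encrypt(b"confidential report", oaep)
plaintext = private_key.decrypt(ciphertext, oaep)
print(plaintext)                         # b'confidential report'
```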
Where Is PKI Used?
The following
list discusses areas in which PKI is normally used. Knowing what PKI is used for gives you a better idea of whether it is needed in a particular network.
. Web security: As you know, the Internet is an unsecured network.
PKI increases web security by offering server authentication, which enables
client systems to validate that the server
they are communicating with is indeed the intended server. Without this information, it is possible for people to place themselves between the client and the server and intercept client data by pretending to be the server. PKI also offers client authentication, which validates the client’s identity.
. Confidentiality: PKI provides secure data transmissions using encryption strategies between the client and the server. In application, PKI works with the Secure Socket Layer (SSL) protocol and the Transport Layer Security (TLS) protocol to provide secure HTTP transfers, referred to as Hypertext Transport Protocol Secure (HTTPS). To take advantage of the SSL and TLS protocols, both the client system and the server require certificates issued by a mutually trusted certificate authority (CA).
. Digital signatures: Digital
signatures are the electronic equivalent of a sealed envelope and are intended
to ensure that a file has not been altered in transit. A file’s digital signature is used to verify not only the publisher of the content or file but also the integrity of the content at the time of download. On the network,
PKI allows you to issue certificates to internal developers/contractors and
allows any employee to verify the origin and integrity of downloaded applications.
. Secure email: Today’s organizations rely heavily on
email to provide external and internal communications. Some of the information
sent via email is not sensitive and does not need security, but for communications that contain sensitive
data, a method is needed to secure email content. PKI can be deployed as a
method for securing email transactions. In application, a private key is used
to digitally sign outgoing emails, and the sender’s
certificate is sent with the email so that the recipient of the email can verify the sender’s
signature.
RADIUS and TACACS+
Among the potential issues network administrators face when implementing
remote access are utilization and the load on the remote-access server. As a network’s remote-access implementation grows, reliance on a single remote-access server might be impossible, and additional servers might be required. RADIUS can help in this scenario.
RADIUS functions as a client/server system. The remote user dials in to the remote-access server, which acts as a RADIUS client, or network access server (NAS), and connects to a RADIUS server. The RADIUS server performs authentication, authorization, and auditing (or accounting) functions and returns the information to the RADIUS client (which is a remote-access server running RADIUS client software); the connection is either established or rejected based on the information received.
Terminal Access Controller
Access Control System+ (TACACS+) is a security protocol designed to provide
centralized validation of users who are attempting to gain access to a router or
Network Access Server (NAS). Like RADIUS, TACACS+
is a set of security protocols designed to provide authentication,
authorization, and accounting (AAA) of remote users. TACACS+ uses TCP port 49 by default.
Although both RADIUS and TACACS+ offer
AAA services for remote users, some
noticeable differences exist:
. TACACS+ relies on TCP for connection-oriented delivery. RADIUS uses connectionless UDP for data delivery.
. RADIUS combines authentication and authorization, whereas TACACS+ can separate their
functions.
Remote Authentication Protocols
One of the most important decisions an administrator needs to make when
designing a remote-access strategy is the method
by which remote
users will be authenticated. Authentication is
simply the way in which the client and server negotiate a user’s credentials when the user tries to
gain access to the network. The exact protocol
used by an organization depends
on its security policies. The authentication methods may include
the following:
. Microsoft Challenge Handshake Authentication Protocol (MS-CHAP): MS-CHAP is used to authenticate remote Windows workstations, providing the functionality to which LAN-based users are accustomed while integrating the hashing algorithms used on Windows networks. MS-CHAP works with PPP, PPTP, and L2TP network connections. MS-CHAP uses a challenge/response mechanism to keep the password from being sent during the authentication process. MS-CHAP uses the Message Digest 5 (MD5) hashing algorithm and the Data Encryption Standard (DES) encryption algorithm to generate the challenge and response. It provides mechanisms for reporting connection errors and for changing the user’s password.
. Microsoft Challenge Handshake Authentication Protocol version 2 (MS-CHAP v2): The second version of MS-CHAP brings enhancements over its predecessor, including support for two-way (mutual) authentication and changes in how the cryptographic keys are derived. Of these authentication methods, MS-CHAP v2 is the most secure. MS-CHAP v2 works with PPP, PPTP, and L2TP network connections.
. Extensible Authentication Protocol (EAP): EAP is an extension of PPP that supports authentication methods that go beyond the simple submission of a username and password. EAP was developed in response to an increasing demand for authentication methods that use other types of security devices, such as token cards, smart cards, and digital certificates.
. Challenge Handshake Authentication Protocol (CHAP): CHAP is a widely supported authentication method that works much the same way as MS-CHAP. A key difference between the two is that CHAP supports non-Microsoft remote-access clients. CHAP allows for authentication without the user actually sending the password over the network. Because it is an industry standard, it allows Windows Server 2003/2008 and Windows Vista to behave as a remote client to almost any third-party PPP server.
. Password Authentication Protocol (PAP): Use PAP only if necessary. PAP is a simple authentication protocol in which the username and password are sent to the remote-access server in clear text, making it possible for anyone listening to network traffic to steal both. PAP typically is used only when connecting to older UNIX-based remote-access servers that do not support any additional authentication protocols.
. Unauthenticated access: Users are allowed to log on without authentication.
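To make the challenge/response idea concrete, here is a minimal Python sketch loosely modeled on CHAP: the shared secret never crosses the link; only a hash of the identifier, the secret, and a random challenge does. The variable names and the sample secret are illustrative only.

# Minimal challenge/response sketch, loosely modeled on CHAP: the password
# is never transmitted, only a hash of (identifier + secret + challenge).
import hashlib, os

SHARED_SECRET = b"correct horse battery staple"   # known to both ends

# Server side: issue a random challenge.
identifier = b"\x01"
challenge = os.urandom(16)

# Client side: prove knowledge of the secret without sending it.
client_response = hashlib.md5(identifier + SHARED_SECRET + challenge).digest()

# Server side: compute the expected response and compare.
expected = hashlib.md5(identifier + SHARED_SECRET + challenge).digest()
print("authenticated" if client_response == expected else "rejected")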
Choosing the correct authentication protocol for remote clients is an important part of designing a secure remote-access strategy. After they are authenticated, users have access to the network and servers. It is recommended that administrators start with the most secure protocol, MS-CHAP v2, and work down the list only as necessary.
Physical Security
Physical security is a combination of good sense
and procedure. The purpose of physical security is to restrict access
to network equipment to only people
who need it.
The extent to which physical security measures can be implemented to
protect network devices and data depends largely on their location. For
instance, if a server is installed in a cabinet
located in a general office
area, the only practical
physical protection is to make sure that the cabinet
door is locked
and that access to keys for the cabinet is
controlled. It might be practical to use other antitheft devices, but that depends
on the exact location of the cabinet.
On the other hand, if your server equipment is located in a cupboard or dedicated room, access restrictions for the room are easier to implement and can be more effective. Again, access should be limited to only those who need it. Depending on the size of the room, this might introduce a number of other considerations.
Servers and other key networking components are those to which you need to apply the greatest level of physical security. Nowadays, most organizations choose to locate servers in a cupboard or a dedicated room.
Server Room Access
Access to the server room should be tightly controlled, and all access doors must be secured by some method, whether it is a lock and key or a retinal scanning system. Each method of server room access control has certain characteristics. Whatever the method of server room access, it should follow one common principle: control. Some access control methods provide more control than others.
Lock and Key
If access is controlled by lock and key,
the number of people with a key should be restricted to only those people
who need access.
Spare keys should
be stored in a safe location,
and access to them should be controlled.
Here are some of the features
of lock-and-key security:
. Inexpensive: Even a very good lock system
costs only a few hundred dollars.
. Easy to maintain: With no back-end systems
and no configuration, using
a lock and key is the easiest
access control method.
. Less control than other methods: Keys
can be lost, copied, and loaned to other people. There is no record of access
to the server room and no way of proving that the key holder is entitled to enter.
Swipe Card and PIN Access
If budgets and policies permit, swipe card and PIN entry systems are good choices for managing physical access to a server room. Swipe card systems use a credit-card-sized plastic card that is read by a reader on the outside of the door. To enter the server room, you swipe the card through the reader, which validates it. Usually, the swipe card’s use to enter the room is logged by the card system, making it possible for the logs to be checked. In higher-security installations, it is common to have a swipe card reader on the inside of the room as well so that a person’s exit can be recorded.
Although swipe card systems have relatively few disadvantages, they do
need specialized equipment so that they can be coded with users’ information.
They also have the same drawbacks as keys in that they can be lost or loaned to other
people. Of course, the advantage that swipe cards
have over key systems is that
swipe cards are very hard to copy.
PIN pads can be used alone or with a swipe card system. PIN pads have the advantage of not needing any kind of card or key that can be lost. For the budget conscious, PIN pad systems that do not have any logging or monitoring capability can be purchased for a reasonable price. Here are some of the characteristics of swipe card and PIN pad systems:
. Moderately expensive: Some systems, particularly those with management capabilities, are quite expensive.
. Enhanced controls and logging: Each time someone enters the server room, he or she must key in a number or use a swipe card. This process enables systems to log who enters and when.
. Some additional knowledge required: Swipe card systems need special software and hardware that can configure the cards. Someone has to learn how to do this.
Biometrics
Although they might still seem like the realm of James Bond, biometric security systems are becoming far more common.
Biometric systems work by using a unique physical characteristic—such as a fingerprint, a palm print, or even a retina scan—to validate a person’s identity.
Although the price of biometric
systems has been falling over recent years, they
are not widely deployed in small to midsized networks. Not only are the systems themselves expensive, but their
installation, configuration, and maintenance
must be considered. Here are some of the characteristics of biometric
access control systems:
. Very effective: Because each person entering
the room must supply
proof-of-person evidence, verification of the person entering the server area is as close
to 100% reliable as you can
get.
. Nothing to lose: Because there are no cards or keys, nothing
can be lost.
. Expensive: Biometric security systems and their attendant scanners and software are still relatively expensive and can be afforded only by organizations that have a larger budget, although prices are sure to drop as more people turn to this method of access control.
Secured Versus Unsecured Protocols
As you know, any network needs a number of protocols in order to function. This includes both LAN and WAN protocols. Not all protocols are created equal: some are designed for secure transfer, and others are not. Table 9.1 lists several protocols and describes their use.
Table 9.1  Protocol Summary

Protocol | Name | Description
FTP | File Transfer Protocol | A protocol for uploading and downloading files to and from a remote host. Also accommodates basic file management tasks.
SFTP | Secure File Transfer Protocol | A protocol for securely uploading and downloading files to and from a remote host. Based on SSH security.
HTTP | Hypertext Transfer Protocol | A protocol for retrieving files from a web server. Data is sent in clear text.
HTTPS | Hypertext Transfer Protocol Secure | A secure protocol for retrieving files from a web server. HTTPS uses SSL to encrypt data between the client and host.
Telnet | Telnet | Allows sessions to be opened on a remote host.
SSH | Secure Shell | A secure alternative to Telnet that allows secure sessions to be opened on a remote host.
RSH | Remote Shell | A UNIX utility used to run a command on a remote machine. Replaced by SSH because RSH sends all data in clear text.
SCP | Secure Copy Protocol | Allows files to be copied securely between two systems. Uses Secure Shell (SSH) technology to provide encryption services.
RCP | Remote Copy Protocol | Copies files between systems, but transport is not secured.
SNMPv1/2 | Simple Network Management Protocol versions 1 and 2 | A network monitoring system used to monitor the network’s condition. Neither SNMPv1 nor SNMPv2 is secured.
SNMPv3 | Simple Network Management Protocol version 3 | An enhanced SNMP service offering both encryption and authentication services.
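To illustrate what makes the secured entries in Table 9.1 different, the following Python sketch opens an HTTPS-style TLS connection and validates the server’s certificate against the system’s trusted CA store before any data is exchanged. The host name is only a placeholder, and the printed fields are examples.

# Sketch showing why HTTPS is listed as "secure": the client validates the
# server's certificate against trusted CAs before any application data flows.
import socket, ssl

context = ssl.create_default_context()          # loads the system CA store
with socket.create_connection(("www.example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="www.example.com") as tls:
        print("Negotiated protocol:", tls.version())          # e.g. TLSv1.3
        print("Server certificate subject:",
              tls.getpeercert().get("subject"))
        # From here on, any HTTP data sent over `tls` is encrypted.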
Managing Common Security Threats
Malicious software, or malware, is a serious problem in today’s computing environments. Malware is often assumed to consist of viruses. Although this typically is true, many other forms of malware are not viruses by definition but are equally undesirable. Malware encompasses many different types of malicious software:
. Viruses:
Software programs or code that are loaded onto a computer without the
user’s knowledge. After it is loaded,
the virus performs
some form of undesirable action on the
computer.
. Macro viruses: Although they are
still a form of virus, macro viruses are specifically designed to damage
office or text documents.
. Worms: Worms are a nasty form of software that propagate automatically and silently without modifying software or alerting the user.
After they are inside a system, they can carry out their intended harm,
whether it is to damage data or relay
sensitive information.
. Trojan horses: Trojan
horses appear as helpful or harmless programs but, when installed, carry
and deliver a malicious payload.
A Trojan horse virus might,
for example, appear to be a harmless or free online game, but, when activated, is actually malware.
. Spyware: Spyware covertly
gathers system information through the user’s
Internet connection without his or her knowledge, usually for
advertising purposes. Spyware applications typically are bundled as a hidden component of freeware or shareware programs that can be downloaded from the Internet.
More on Viruses
Viruses and their effects are well documented and are feared by users and
administrators alike. The damage from viruses varies greatly,
from disabling an entire network to damaging applications
on a single system. Regardless of the impact,
viruses can be destructive, causing irreplaceable data loss and consuming hours of productivity.
As mentioned, not all the malware encountered is by definition a virus. To be considered a virus, the
malware must possess
the following characteristics:
. It must be able to replicate itself.
. It requires a host program
as a carrier.
. It must be activated or executed in order to run.
Many different types of viruses
exist:
. Resident virus: A resident virus installs itself
into the operating system and stays there.
It typically places
itself in memory
and from there
infects and does damage.
The resident virus loads with the operating system at boot.
. Variant virus: Like other applications, viruses are enhanced from time to time to make them harder to detect and to modify the damage they do. Modifications to existing viruses are called variants because they are rereleased versions of known viruses.
. Polymorphic virus: One particularly hard-to-handle type of virus is the
polymorphic. It can change its characteristics to avoid detection.
Polymorphic
viruses are some of the most difficult types to detect and remove.
. Overwriting/nonoverwriting virus: Viruses can be designed to overwrite files or code and replace them with modified data. In many cases the application can function as normal, so the user does not know the program has been modified. Nonoverwriting viruses amend an application by adding files or code.
. Stealth virus: A stealth virus can hide itself to avoid detection. Such viruses often fool detection programs by appearing as legitimate programs or hiding within legitimate programs.
. Macro virus: Macro viruses are designed to infect and corrupt documents. Because documents are commonly shared, these viruses can spread at an alarming rate.
More on Worms and Trojan Horses
Trojan horses, as the name implies, are about hiding. Trojan horses come hidden in other programs. For example, a Trojan horse can be hidden in a shareware game. The game looks harmless, but when it is downloaded and executed, the Trojan operates in the background, corrupting and damaging the system. Trojan horses differ from viruses because they do not replicate themselves and do not require a host program to run. They are commonly found on P2P sharing networks, where interesting and helpful-looking programs are actually disguised Trojan horses. Trojan horses are also spread when programs are shared using email communications or removable media. In the past, many executable jokes sent through email, such as cartoons and amusing games, were in fact the front end of a Trojan horse.
Worms are different and have the potential to spread faster than any other form of malware. Worms can be differentiated from viruses: although they can replicate, they do not require a host and do not require user intervention to propagate. Worms can spread at an alarming rate because they often exploit security holes in applications or operating systems. As soon as a security hole is found, worms automatically begin to replicate, looking for new hosts with the same vulnerability. Worms look for an Internet connection and then use that connection to replicate without any user intervention. Table 9.2 describes the differences between worms, Trojan horses, and viruses.
Table 9.2  Comparing Malware Types

Malware Type | Replication | Host Requirement | Activation
Virus | Can self-replicate. | Requires a host program to propagate. | Needs to be activated or executed by a user.
Trojan horse | Does not replicate itself. | Does not require a host program. | The user must execute the program in which the Trojan horse is hidden.
Worm | Self-replicates without user intervention. | Self-contained and does not require a host. | Replicates and activates without requiring user intervention.
Denial of Service and
Distributed Denial of Service Attacks
Denial of service (DoS) attacks are designed to tie up network bandwidth
and resources and eventually bring the entire network to a halt. This type of
attack is done simply by flooding
a network with more traffic
than it can handle. A DoS attack is not designed to steal data but rather to cripple a network and, in doing so, cost a company an enormous amount of money.
The effects of DoS attacks
include the following:
. Saturating network resources, which then renders
those services unusable.
. Flooding the network media, preventing communication between computers on the network.
.
User downtime because of an inability to access required services.
. Potentially huge financial losses
for an organization due to network and service downtime.
Types of Denial of Service
Attacks
Several different types of DoS attacks exist, and each seems to target a different
area. For instance, they might target
bandwidth, network service, memory, CPU, or hard drive
space. When a server or other system is overrun by malicious requests, one or more of these core resources breaks down, causing the system to crash or stop responding.
. Fraggle: In a Fraggle attack, spoofed UDP packets are sent to a network’s broadcast address. These packets are directed to specific ports, such as port 7 (echo) or port 19 (chargen), and can flood the system with the resulting responses.
. Smurf: The Smurf attack is similar to
a Fraggle attack. However, a ping
request is sent to a broadcast network
address, with the sending address spoofed so that many ping replies
overload the victim and prevent it from processing the replies.
. Ping of death: With this attack, an oversized ICMP datagram is used to crash IP devices that were manufactured before 1996.
. SYN flood: In a typical TCP session, communication between two computers is initially established by a three-way handshake, referred to as SYN, SYN/ACK, ACK. At the start of a session, the client sends a SYN message to the server. The server acknowledges the request by sending a SYN/ACK message back to the client. The connection is established when the client responds with an ACK message. In a SYN attack, the victim is overwhelmed with a flood of SYN packets. Every SYN packet forces the targeted server to produce a SYN/ACK response and then wait for the ACK acknowledgment. However, the attacker does not respond with an ACK, or spoofs the source IP address with a nonexistent address so that no ACK response ever arrives. The result is that the server begins filling up with half-open connections. When all the server’s available resources are tied up on half-open connections, it stops acknowledging new incoming SYN requests, including legitimate ones (a toy simulation of this effect follows this list).
. ICMP flood: An ICMP flood, also known as a ping flood, is a denial of service attack in which large numbers of ICMP messages are sent to a computer system to overwhelm it. The result is that the TCP/IP protocol stack can no longer service other TCP/IP requests.
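The following Python sketch is a toy simulation of the SYN flood effect described above. No packets are sent; it simply models a fixed-size table of half-open connections so you can see a legitimate request being dropped once spoofed SYNs fill the backlog. The backlog size and addresses are arbitrary.

# Toy simulation of SYN flooding (no network traffic is generated): each
# spoofed SYN occupies a slot in the server's half-open connection table
# until the table is full and further SYNs are dropped.
BACKLOG_SIZE = 5
half_open = []                      # connections waiting for the final ACK

def receive_syn(source_ip, will_ack):
    if len(half_open) >= BACKLOG_SIZE:
        return f"SYN from {source_ip} dropped (backlog full)"
    half_open.append(source_ip)     # server sends SYN/ACK and waits
    if will_ack:                    # legitimate client completes the handshake
        half_open.remove(source_ip)
        return f"{source_ip} connected"
    return f"{source_ip} left half-open (no ACK will arrive)"

# Attacker sends SYNs from spoofed, nonexistent source addresses.
for i in range(BACKLOG_SIZE):
    print(receive_syn(f"198.51.100.{i}", will_ack=False))

# A legitimate client now cannot get in.
print(receive_syn("203.0.113.7", will_ack=True))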
Other Common Attacks
The following are some of the more
common attacks used today:
. Password attacks: Password attacks
are one of the most
common types of
attacks. Typically, usernames are easy to obtain. Matching the username with the password allows the intruder to gain system access at the level associated with that particular user. This is why it is vital to protect administrator passwords. Obtaining a password with administrator privileges gives the intruder unrestricted access to the system or network.
. Social engineering: Social
engineering is a common form of cracking. It can be used by both outsiders and
people within an organization. Social engineering is a hacker term for tricking people into revealing their password or some other security information. It might include trying to get users to
send passwords or other information over email, shoulder surfing, or any other
method that tricks users into divulging information. Social engineering is an
attack that attempts to take advantage of human behavior.
. Eavesdropping: As the name implies, eavesdropping involves an intruder who obtains sensitive information such as passwords, data, and procedures for performing functions by intercepting, listening to, and analyzing network communications. It is
possible for an intruder to eavesdrop by wiretapping, using radio, or using
auxiliary ports on terminals. It is also possible to eavesdrop using software that monitors packets
sent over the network. In most
cases, it is difficult to detect eavesdropping, making it important to ensure
that sensitive data is not sent over the network in clear text.
. Back door attacks: In a back door
attack, an attacker gains access to a computer
or program by bypassing standard
security mechanisms. For instance, a programmer might install a
back door so that the program can be accessed for troubleshooting or other
purposes. Sometimes, as discussed earlier,
nonessential services are installed by default, and it is possible to gain access using one of these unused services.
. Man-in-the-middle attack: In a
man-in-the-middle attack, the intruder places himself between the sending and
receiving devices and captures the communication as it passes by. The interception of the data is invisible to those actually sending and receiving the data. The intruder can capture the network data and manipulate it, change it, examine it, and then send it on. Wireless communications are particularly susceptible to this type of attack. A rogue access point is an example of a man-in-the-middle attack.
. Spoofing: Spoofing is a technique in which the real source of a transmission, file, or email is concealed or replaced with a fake source. This technique enables an attacker, for example, to misrepresent the original source of a file available for download. Then he can trick users into accepting a file from an untrusted source, believing it is coming from a trusted source.
. Rogue access points: A rogue access point describes a situation in which a wireless access point has been placed on a network without the administrator’s knowledge. The result is that it is possible to remotely access the rogue access point, because it likely does not adhere to company security policies. All the network’s security can therefore be compromised by a cheap wireless router placed on the corporate network.
. Phishing: Users often receive a variety of emails offering products, services, information, or opportunities. Unsolicited email of this type is called phishing (pronounced “fishing”). This technique involves a bogus offer sent to hundreds of thousands or even millions of email addresses. The strategy plays the odds: for every 1,000 emails sent, perhaps one person replies. Phishing can be dangerous, because users can be tricked into divulging personal information such as credit card numbers or bank account information.
An Ounce of Prevention
The threat from malicious code is a very real concern. It is important to take precautions to protect your systems. Although it might not be possible to eliminate the threat, you can significantly reduce it.
One of the primary tools used in the fight against malicious
software is antivirus
software. Antivirus software is available from a number of
companies, and each offers similar features and capabilities. The following is
a list of the common features and
characteristics of antivirus software:
. Real-time protection: An installed antivirus program should continuously monitor the system, looking for viruses. If a program is downloaded, an application opened, or a suspicious email received, the real-time virus monitor detects and removes the threat. The antivirus application sits in the background, largely unnoticed by the user.
. Virus scanning: An antivirus program must be able to scan selected drives and disks, either locally or remotely. Scanning can be run manually or can be scheduled to run at a particular time.
. Scheduling: It is a best practice to schedule virus scanning to occur automatically at a predetermined time. In a network environment, this typically is off hours, when the overhead of the scanning process won’t impact users (a sample off-hours scan script follows this list).
. Live updates: New viruses
and malicious software
are released with alarming frequency. It is recommended that the antivirus
software be configured to receive virus updates regularly.
. Email vetting: Emails represent one of the primary sources of virus delivery. It is essential to use antivirus software that provides email scanning for both inbound and outbound email.
. Centralized management: If used in a
network environment, it is a good idea to use software that supports managing
the virus program from the server. Virus updates
and configurations only need to be made on the server, not on each individual client station.
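As one way of automating the off-hours scan mentioned above, the following Python sketch wraps the ClamAV clamscan command and logs each run. It assumes clamscan is installed, and it would itself be launched by the operating system’s scheduler (such as cron or Task Scheduler) at a quiet time. Paths and options are examples only.

# Illustrative off-hours scan script, assuming the ClamAV "clamscan" scanner
# is installed; an OS scheduler would launch this at a quiet time such as 2 a.m.
import subprocess, datetime

def run_scan(target="/home"):
    started = datetime.datetime.now()
    # -r scans recursively; --infected lists only files found to be infected.
    result = subprocess.run(
        ["clamscan", "-r", "--infected", target],
        capture_output=True, text=True)
    # Append a summary of the run to a log file (path is an example).
    with open("/var/log/scheduled_scan.log", "a") as log:
        log.write(f"{started:%Y-%m-%d %H:%M} exit={result.returncode}\n")
        log.write(result.stdout)

if __name__ == "__main__":
    run_scan()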
Managing the threat from viruses is considered a proactive measure, with antivirus software being only part of the solution. A complete virus protection strategy involves several additional measures that help limit the risk of viruses:
. Develop in-house policies and rules: In a corporate environment or even a small office, it is important to establish what information can be placed on a system. For example, should users be able to download programs from the Internet? Can users bring in their own storage media, such as USB flash drives?
. Monitoring virus threats: With new
viruses coming out all the time, it is important to check whether
new viruses have been released
and what they are designed to do.
. Educate users: One of the keys to a complete antivirus solution is to train users in virus prevention and recognition techniques. If users know what they are looking for, they can prevent a virus from entering the system or network.
. Back up important documents: No solution is absolute, so care should be taken to ensure that the data is backed up. In the event of a malicious attack, redundant information is then available in a secure location.
. Automate virus scanning and updates:
Today’s antivirus software can be configured to scan and update itself
automatically. Because such tasks
can be forgotten and overlooked, it is recommended that you have these
processes scheduled to run at predetermined times.
. Patches and updates: Vendors frequently release patches and updates for all applications, including productivity software, virus checkers, and especially the operating system; these are often designed to address potential security weaknesses. Administrators must keep an eye out for these patches and install them when they are released.