Tuesday, June 28, 2011

New malware hides in the PC's Master Boot Record, fools cleaning attempts

Reposted article by Gregg Keizer / June 27, 2011 01:25 PM ET

Computerworld - Microsoft is telling Windows users that they'll have to reinstall the operating system if they get infected with a new rootkit that hides in the machine's boot sector.

A new variant of a Trojan Microsoft calls "Popureb" digs so deeply into the system that the only way to eradicate it is to return Windows to its out-of-the-box configuration, Chun Feng, an engineer with the Microsoft Malware Protection Center (MMPC), said last week on the group's blog.

"If your system does get infected with Trojan:Win32/Popureb.E, we advise you to fix the MBR and then use a recovery CD to restore your system to a pre-infected state," said Feng.

A recovery disc returns Windows to its factory settings.

Malware like Popureb overwrites the hard drive's master boot record (MBR), the first sector -- sector 0 -- where code is stored to bootstrap the operating system after the computer's BIOS does its start-up checks. Because it hides on the MBR, the rootkit is effectively invisible to both the operating system and security software.

According to Feng, Popureb detects write operations aimed at the MBR -- operations designed to scrub the MBR or other disk sectors containing attack code -- and then swaps out the write operation with a read operation.

Although the operation will seem to succeed, the new data is not actually written to the disk. In other words, the cleaning process will have failed.
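To picture what that swap means, here is a toy simulation in Python. This is only a sketch of the idea, not Popureb's actual code (the real thing works inside the Windows disk driver, and all names here are invented): a "hooked" write routine quietly turns any write aimed at sector 0 into a read, so the cleaning tool is told it succeeded while the MBR never changes.

    # Toy model of the write-to-read swap (illustration only; names invented).
    # A real rootkit does this inside the disk driver; this fake "disk" just
    # shows why an apparently successful cleaning leaves sector 0 untouched.
    SECTOR_SIZE = 512
    disk = {0: b"\xEB" + b"\x90" * (SECTOR_SIZE - 1)}  # sector 0 holds the infected MBR

    def honest_write(sector, data):
        disk[sector] = data
        return True

    def hooked_write(sector, data):
        """The hook: writes aimed at sector 0 become reads that report success."""
        if sector == 0:
            disk.get(sector)   # perform a harmless read instead of the write
            return True        # ...but still tell the caller the write worked
        return honest_write(sector, data)

    clean_mbr = b"\x00" * SECTOR_SIZE
    print(hooked_write(0, clean_mbr))   # True: the cleaning tool thinks it succeeded
    print(disk[0] == clean_mbr)         # False: the infection is still in place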
Feng provided links to MBR-fixing instructions for XP, Vista and Windows 7.

Rootkits are often planted by attackers to hide follow-on malware, such as banking password-stealing Trojans. They're not a new phenomenon on Windows.

In early 2010, for example, Microsoft contended with a rootkit dubbed "Alureon" that infected Windows XP systems and crippled machines after a Microsoft security update.

At the time, Microsoft's advice was similar to what Feng is now offering for Popureb.

"If customers cannot confirm removal of the Alureon rootkit using their chosen anti-virus/anti-malware software, the most secure recommendation is for the owner of the system to back up important files and completely restore the system from a cleanly formatted disk," said Mike Reavey, director of the Microsoft Security Response Center (MSRC), in February 2010.

Since then, Microsoft has added a check for the Alureon rootkit to all security updates so that when the malware is detected, the updates are not installed.

Gregg Keizer covers Microsoft, security issues, Apple, Web browsers and general technology breaking news for Computerworld. Follow Gregg on Twitter at @gkeizer or subscribe to his RSS feed. His e-mail address is gkeizer@computerworld.com.

Friday, March 04, 2011

Our National Oil Crisis In A Nutshell

A lot of folks can't understand how we came to have an oil shortage here in our country.

Well, there's a very simple answer.

Nobody bothered to check the oil.

We just didn't know we were getting low.

The reason for that is purely geographical.

Our OIL is located in:

Alaska
California
Coastal Florida
Coastal Louisiana
North Dakota
Wyoming
Colorado
Kansas
Oklahoma
Pennsylvania
and
Texas

Our dipsticks are located in DC.

Tuesday, February 15, 2011

Ways to Guarantee Network Damage

Do you ever see "techies" walking around your office with a lanyard around their neck and a USB dongle hanging from the end of it?  Do you allow employees to insert USB devices, CD-ROMs or DVDs brought from home into their computers at work?

If you do, it's likely that files from your network will eventually be copied or moved to one of those external USB drives.  Or you'll discover that your computers are infected with viruses, worms or infamous rootkits.
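If you decide to lock down removable storage rather than just worry about it, one common control is to disable the USB mass-storage driver on Windows. Here's a minimal Python sketch, assuming administrator rights; it flips the standard USBSTOR "Start" registry value, but test it on a non-production machine first.

    import winreg

    # Setting Start=4 marks the USBSTOR service as disabled, which blocks
    # USB mass-storage devices; Start=3 (load on demand) restores them.
    USBSTOR_KEY = r"SYSTEM\CurrentControlSet\Services\USBSTOR"

    def set_usb_storage(enabled):
        """Enable or disable the USB mass-storage driver (requires admin rights)."""
        value = 3 if enabled else 4
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, USBSTOR_KEY,
                            0, winreg.KEY_SET_VALUE) as key:
            winreg.SetValueEx(key, "Start", 0, winreg.REG_DWORD, value)

    if __name__ == "__main__":
        set_usb_storage(False)   # block thumb drives brought from home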

I downloaded an article today that everyone needs to read.  Click here to download and read it for yourself.

You can email us (johnfair@jfsi.com) or call 336-293-7757 if you have questions after reading this material.

Wednesday, January 26, 2011

You've Been Hacked - What Do You Do?

Do you have a plan if you discover that your network has been breached by a hacker with malicious intent?  I ran into the following article this morning and thought it was worth sharing with you.
----------------------------------------------------

You've Been Hacked!

January 24, 2011

A co-worker and I watched at around 11pm a few weeks ago as our primary server was jacked. Yes, we were smart, backed up, and had done many of the things required to secure a server against attacks. Nevertheless, we watched as the server was breached, the passwords were changed in a heartbeat, and it became some sort of media server. A software service/daemon on the box had been pounded by a blinding attack until it simply died, leaving root access in its wake. We tried as best we could to shut down the service and the access, but we were too slow, and the injected script took over the server. Normally we wouldn't be watching this server at all; it was by a fluke of circumstances that it was being worked on at 11pm.
Gone were our email and WWW servers. There were no archives stored on these public-facing servers, although everything was fully backed up. And we were now dead in the water. In another cabinet at our ISP, the other servers were untouched. No one was hurt. Within an hour, by remote control, we'd taken down the hijacked server and reestablished ourselves online. But we're small, and our needs are small. We don't live on web transactions, and we know that others do.
The experience, however, reminded us of how we’ve watched organizations succeed, flounder, or fail when a hack attempt becomes successful. We’ve put together a list of survival tips to help ensure more successful outcomes.
Assets must be secured before they can be made productive again. There should be a plan available, but the plan often depends on an initial assessment.
If you’ve been hacked, here are your next steps:
Assess damage and remove services
Discover what’s not working. Each service that’s not working likely has dependencies or is dependent on other resources. Those resources may or may not be compromised. Having a list of application and service process interdependencies handy allows the individual making the assessment to determine subsequent steps.
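As a sketch of what such a dependency list buys you, here's a small Python example (the service names are hypothetical): given a map of which services depend on which resources, a simple traversal reports everything potentially affected by one compromised component.

    from collections import deque

    # Hypothetical dependency map: service -> resources it depends on.
    DEPENDS_ON = {
        "webmail": ["imap", "dns"],
        "imap":    ["auth-db", "storage"],
        "www":     ["dns", "storage"],
        "auth-db": ["storage"],
    }

    def affected_by(compromised):
        """Return every service that directly or transitively depends on
        the compromised resource."""
        hit, queue = set(), deque([compromised])
        while queue:
            resource = queue.popleft()
            for service, deps in DEPENDS_ON.items():
                if resource in deps and service not in hit:
                    hit.add(service)
                    queue.append(service)
        return hit

    print(affected_by("storage"))  # {'imap', 'auth-db', 'webmail', 'www'}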
Having an important service go down is a symptom. The components might be compromised, but they might have died a ‘natural’ death. Hardware failures, while not as common as in the past, still dog production systems. Software can also blow up of its own accord. Or, as in our case, they can be taken down by malevolent action.
The assessment phase determines whether assets have been damaged or compromised. Damaged assets must be restarted elsewhere, while compromised assets may still be infected or vulnerable to a subsequent attack.
Compromised assets may need to be powered off, routed around, or otherwise removed from accessibility, so as not to spread damage. Hijacked machines may immediately try to jack other machines, or probe network resources -- sometimes silently -- for other hardware to infect.
You may need to power down the equipment prior to a forensic assessment. If remote power-off isn't available, you can contact a DNS administrator with authority over the DNS entries for the server(s) that need to be routed around. A DNS change doesn't take all of the hits away from a server, since users and processes with a cached entry (or even a hosts-table entry) will still go to the IP address rather than the canonical name of the compromised server(s). You'll need to alter DNS again once services are ready to be restored.
You can turn off mail almost immediately by removing the DNS MX record. Servers will provide bounce messages to mail senders. The record can be re-established once mail services are tested and ready for business.
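If your DNS host supports RFC 2136 dynamic updates, pulling the MX record can even be scripted. Here's a sketch using the dnspython library; the zone name, TSIG key, and server address are placeholders, and many ISPs (ours included) require a trouble ticket instead.

    import dns.query
    import dns.tsigkeyring
    import dns.update

    # Placeholder TSIG key and server; your DNS host must permit dynamic updates.
    keyring = dns.tsigkeyring.from_text({"update-key.": "c2VjcmV0LWtleS1iYXNlNjQ="})

    update = dns.update.Update("example.com", keyring=keyring)
    update.delete("@", "MX")              # drop the MX record at the zone apex
    response = dns.query.tcp(update, "192.0.2.53")
    print(response.rcode())               # 0 (NOERROR) means the update took

    # To restore mail once services are tested and ready:
    # update = dns.update.Update("example.com", keyring=keyring)
    # update.add("@", 3600, "MX", "10 mail.example.com.")
    # dns.query.tcp(update, "192.0.2.53")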
In our case, we were unable to power down the server remotely, so we asked our ISP through their DNS trouble ticketing system to move our WWW and mail records to another address. Part of our rationale was that our mail server was still accepting mail, even though we couldn't access the IMAP or POP3 services to get the mail because the password table had been altered.
The nature of an attack also gives clues as to the integrity of the backups. Injection attacks, in which data is crammed into an application until it overloads and 'explodes', are real-time attacks that generally succeed when applications are poorly written or the application infrastructure isn't updated and patched. Restoring this type of unpatched application leaves the site vulnerable again. Sometimes a site will be attacked immediately upon restoration with the same successful exploit, until the base files are patched or the applications are modified to withstand data injection.
Rapid assessment and potential forensic analysis are therefore critical to determining what happens next in terms of revelation, and bringing assets back online.
Alert your legal department
Depending on your state laws, regulatory compliance requirements, or business partner obligations, you may need to assess privacy and consumer-information breaches for reporting purposes. Data and components may need to be isolated for further determination of what, if anything, the breach exposed, and for how long.
A typical procedure is to power down and freeze the state of compromised devices so that they can be examined by forensic professionals. In turn, the examiners usually work with legal departments to achieve the goal of complying with legal and regulatory directives.
You may need to take forensic steps to establish the size of the breach, how long privacy or data was compromised or visible, and depending on the jurisdiction involved, make available information to regulatory or other authorities as legal compliance mandates. Your legal department will know the steps to take, and what must be done in terms of compliance regarding the possible breach of information.
The amount of time that IT has to perform this reporting varies, but sooner is usually better. We also advise that internal and external PR departments be in on the reporting process, as an onslaught of questions often follows privacy breaches, and the inquiries will need to be handled through normal email and possibly social networks as well.
Restore assets to production
Each organization restores damaged services differently, if methodically. Once the state of processes, communications infrastructure, and assets has been determined, with or without the need to freeze those assets for subsequent forensic analysis, what remains is to bring things back online and get processes working again.
Production services, especially money-making processes, are brought online first, after testing checks where required. Often, web-facing services have already been ranked in terms of priority. If you don't make money or provide customer services through the web, email is usually the top service required.
In our case, our production WWW server was based on Apache and a rudimentary CMS called GeekLog. Mail was provided by sendmail and postfix, with an IMAP application used to access mail via the web. We didn't do financial transactions over our web server, and the pages served were essentially static, with comparatively low hit rates.
We replaced their functionality with web appliances from TurnkeyLinux.org, which consist of an Ubuntu-based Linux distribution plus an appliance payload. The payloads in our case were Zimbra for email, and Apache with WordPress on top (which let us import our old GeekLog files).
Other services may be restored from backups. Those backups may need to be examined for evidence of malware or unauthorized access; they may be compromised in the same way the servers were when they were taken down. Sometimes services are interrupted by injection attacks, clobbered services, or other instant take-downs. Some servers die because they'd been infected for a long time before someone got around to exploiting them and destroying their integrity.
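One way to vet a backup before trusting it is to compare file hashes against a manifest captured when the system was known to be good. Here's a minimal Python sketch; the manifest path and its line format are assumptions.

    import hashlib
    import pathlib

    def sha256_of(path):
        """Hash a file in chunks so large backups don't exhaust memory."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def verify_backup(backup_root, manifest):
        """Report files whose hashes differ from the known-good manifest.
        Assumed manifest format: '<sha256>  <relative/path>' per line."""
        suspect = []
        root = pathlib.Path(backup_root)
        for line in pathlib.Path(manifest).read_text().splitlines():
            expected, rel = line.split(None, 1)
            target = root / rel.strip()
            if not target.exists() or sha256_of(target) != expected:
                suspect.append(rel.strip())
        return suspect

    if __name__ == "__main__":
        for name in verify_backup("/restore/backup-2011-01-20", "known_good.sha256"):
            print("CHECK:", name)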
Document what you did
Maybe you had a plan, maybe not. In either case, documentation of how you worked through analysis, compliance, and restoration will be needed again. We hope not, but we know better. Are high-availability services, such as mirrored or fail-over servers, worth the cost? Are there better ways to protect production services from disaster? What are the lessons learned, and what can be done better next time to shorten outages or improve services?
It happens fast, and most often by surprise. Some are prepared and have the ability to restore production systems quickly, while others are in scramble mode. Planning, along with practiced, methodical restoration drills, takes most of the burn out of a system take-down.
What keeps many IT engineers up at night is wondering what new, inventive ways will be found to crack open servers. Many patch and fix applications constantly, doing their best to keep services at the current revision so that zero-day cracks won't take them down. They'll hack themselves, looking for ways to clobber services before the bad guys do. Then it happens anyway, and your work is cut out for you.
Survival tips for the hack: Recovery plans
  • Have a printed list, and a distributed text file that can be stored on notebooks, phones, and administratively accessible areas, with key contacts' email addresses and mobile phone numbers, their roles in recovery, their availability or vacation times, and an alternate contact for each. The contact names and email addresses of everyone in the data delivery chain must be on the list. For those who are nervous: encrypt the list with a password you'll remember in a 3m3rg3nCy (one way to do that is sketched after this list).
  • Someone must have the designated cell/mobile phone that can be tethered as a datalink to the Internet when Internet connectivity is interrupted. That person must be known, available, and accessible, and must have the right cable for tethering the phone along with the right software for the job. An uninterruptible power supply must be running key workstations, and those supplies must have charged batteries with spares nearby.
  • Have a location map of where physical servers, routers, and other equipment actually live (or perhaps lie dead). Network cabling maps are handy, as are cross-connect diagrams for servers, SAN components, and IOS or other 'back-channel' cabling.
  • Know whom to contact to reboot equipment physically, whether by power cycling or by punching the reset button (located who-knows-where on a server's or router's front panel), in the dark, hanging upside down, feeling your way through a maze. This is especially important if your servers are located off-premises at an ISP or MSP site; rebooting at 4am is often contingent on having an available finger to press the button.
  • Have a list of public safety contacts in case Halon or other fire control mechanisms have discharged. This list must cover all sites, including contractor/MSP and ISP sites.
  • Know your applications, versions, and dependencies. Your apps depend on other apps, as well as on configuration settings (registry and 'conf' files) that vary with each instance. Configuration management tools know which patch/fix levels your servers and/or website must have, and which packages must be installed and ready to go. Keep a printed list someplace where it can easily be found, update it every Wednesday -- following "Patch Tuesday" -- and keep it in secure storage.
  • Have a public relations and legal process in place to deal with post-breach issues. PR professionals must work together with legal departments to determine the best posture for the organization to both comply with the law and tell stockholders, stakeholders, customers, and the general public about a breach and its ramifications.
  • If you use a hot-site or availability site, be prepared to test that site with random fail-overs to the backup site -- your version of the data fire drill. Never spend a dime on hot-sites, availability sites, or mirror sites unless you're fully prepared to test failure in this way.
  • Prepare for the worst. Standard emergency equipment includes a working radio with batteries, a shovel, two non-flammable blankets, a dozen 'power bars', distilled or drinking water in gallon containers (kept away from critical equipment), several charged UPSs for phone and other device charging, a separate set of access keys, charged LED-type flashlights, indelible ink pens and paper, thick-mil trash bags, a toolkit, a floor-puller, a crowbar, and the above-mentioned printed lists.
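For the encrypted copy of that contact list, one approach (a sketch, not a prescription) is password-based encryption with Python's cryptography library; the file names and password below are placeholders.

    import base64
    import os

    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def _key_from_password(password, salt):
        """Derive a Fernet key from a memorable password via PBKDF2."""
        kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                         salt=salt, iterations=480000)
        return base64.urlsafe_b64encode(kdf.derive(password.encode()))

    def encrypt_contact_list(path, password):
        """Write an encrypted copy of the contact list next to the original."""
        salt = os.urandom(16)
        token = Fernet(_key_from_password(password, salt)).encrypt(
            open(path, "rb").read())
        with open(path + ".enc", "wb") as out:
            out.write(salt + token)   # store the salt with the ciphertext

    def decrypt_contact_list(path, password):
        blob = open(path, "rb").read()
        salt, token = blob[:16], blob[16:]
        return Fernet(_key_from_password(password, salt)).decrypt(token)

    if __name__ == "__main__":
        encrypt_contact_list("contacts.txt", "3m3rg3nCy")
        print(decrypt_contact_list("contacts.txt.enc", "3m3rg3nCy")[:40])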

Friday, January 14, 2011

Personalizing The AZ Tragedy

Recent events in Arizona have indeed been an American tragedy.  A crazy man looked for his day in the sun and created a situation that has effectively brought government business to a halt.  It's diverted the news media away from everything except those events.  And it's created a situation where everyone who is politically right of center is vilified as responsible for the crazy man's actions.

I saw an image in our local newspaper last night.  I was intrigued enough by the image to write this little post this morning just so I could pass the image on to those of you who visit this site. 

The image typifies my feelings with respect to the Arizona actions.  Here it is.


My heart is heavy for those who lost their lives or loved ones.  It's also heavy for Rep. Giffords, whose life was forever changed by the horrible events.

The actions of the crazy man were NOT the result of Fox News, Rush Limbaugh or Glenn Beck.  Fort Hood was a tragedy; but nobody pushed to shut down talk radio or blame people who were politically different. 

It's time to put sanity back in government! 

Wednesday, January 12, 2011

Don't Let Snow And Ice Stop You From Working

I don't know how it is where you live and work, but winter 2011 has been about as rough as any I've seen lately in North Carolina.  We've had a couple of pretty big snow storms.  Roads, parking lots and driveways are especially slick right now.

My wife INSISTED on going to work this morning -- probably because she could not tolerate me for another day.  She just called and told me that the roads are passable (except for some patches of black ice).  She tacked on information about sidewalks and parking lots, saying they were treacherous sheets of ice.  She DARED me to leave my office today because of the risk of falling.

That being the case, I'm especially glad that I have tools that allow me to continue to be effective and to work with clients who have problems with their computers.  Let me tell you what I recommend.

Remote Desktop.  If you're ever away from your office and need access to your home or office desktop -- not an issue.  Two products come to mind, and every laptop needs one of them installed.  GoToMyPC.com is the place to see Citrix's offering, which lets you remotely control your desktop computer(s).  It's not free, but it's a lot less expensive than the productivity losses that come with weather-related issues.  Another product of this type is from LogMeIn.com.  While their full-function product is about the same price as GoToMyPC, they also have a free, limited-function version that may be good enough for casual (and occasional) users.

Remote Assistance.  Rarely does a day go by without a telephone call from a user who thinks he or she is "in trouble" -- the computer isn't working or something similar.  Citrix has MY answer to this -- it's GoToAssist, a program that allows me to remotely run a user's computer.  It's also a great training tool for me, because I can walk a client through an issue or demonstrate how to solve a client's problem.

Call me at 336-293-7757 if you have questions about this group of programs or need information on any of your immediate IT issues.

Thursday, January 06, 2011

Correcting an error

I was just chatting with a good friend who started the conversation with "...I understand that you have retired." 

I reminded her that I know I'm old, fat, dumb and ugly -- but the word "retire" is not yet part of my vocabulary.  Though I have a lot more aches and pains than I had even a year or two ago, my definition of "retire" is being able to take an occasional afternoon off without feeling guilty.

I'm still very much active with JFSI and intend to stay active as long as I possibly can -- hopefully until the day I die.  My friend could not remember who told her, so I thought I'd just correct the record for those of you who read this blog.