Posted by Mark Brousseau
’Tis the season to be jolly – and to leave sensitive corporate information behind at the airport!
According to telephone interviews with the lost-property offices of 15 UK airports, including Heathrow and Luton, more than 5,100 mobile phones and 3,844 laptops have been left behind so far this year, with the majority still unclaimed and many more expected to be left over the Christmas holiday peak season. These figures are likely to be just the tip of the iceberg: ABTA expects over 4 million people to be travelling over this period, and the totals do not take into account devices that were stolen or kept by the ‘lucky’ finder.
The survey, carried out by Credant Technologies, also found that in the majority of cases, devices that aren’t reclaimed are either sold at auction or donated to charities. However, these devices may still contain information that is accessible to their new owners. With ID theft from mobile phones and other lost devices at an all-time high, users should take special care when travelling this Christmas.
According to a representative at Luton Airport, the most common place devices are forgotten is the security checkpoint, a pressured environment with numerous distractions. Once travellers have boarded the plane and left the country, it is often too expensive to return for the device, which in most instances is covered by insurance, so the majority go unclaimed.
But the device’s value is the last thing organisations should be worrying about, explains Seán Glynn, vice president at Credant Technologies, “What is much more concerning are the copious volumes of sensitive data these devices contain – often unsecured and easily accessed. Without protecting mobile phones, laptops and even USBs with something even as basic as a password, a malicious third party can have easy access to the corporate network, email accounts and all the files stored on the device including the contact lists. Users also store such things as passwords, bank details and other personal information on the device making it child’s play to impersonate the user and steal their identity – both personal and corporate.”
Credant Technologies provides the following eight tips to secure corporate information during holiday travel:
1. As you leave the check-in desk, security checkpoint or even the train station, make sure you take everything with you, including your mobile devices. A few seconds spent checking could save you hours of frustration and embarrassment.
2. Protect your mobile device with at least a password (and ensure that it is a strong one, containing letters, numbers and symbols). Better still, use an encryption solution so that even if your device is left behind, the data on it is not accessible to anyone who finds it.
3. Don’t allow your device to automatically complete online credentials, such as corporate network log-in details, so that if you and your device become separated, it cannot be used without you.
4. Back up your device and remove any sensitive information that you do not need. If it’s not there, it can’t be breached.
5. Similarly, delete SMS messages and emails that you no longer need. You’d be surprised how many people keep default-password emails and other hugely sensitive information, such as PINs, bank account details and passwords, on their mobiles.
6. Don't leave your mobile device somewhere visible and unsecured, or open to access (for example, with Bluetooth or WiFi turned on).
7. Include your name and contact details in the device so that, if it should be lost, it can easily be returned to you. Some operators have a registration service to facilitate this.
8. Finally, speak to your IT department before you leave the office this year – that’s what they’re there for. They’ll help make sure your device is better protected should it find itself languishing all alone at the airport.
What do you think?
Monday, November 29, 2010
There’s a Bounty on your Applications
By Anthony Haywood of Idappcom
In the last year a number of organizations have offered rewards, or ‘bounty’ programs, for discovering and reporting bugs in applications. Mozilla currently offers up to $3,000 for identifying critical or high-severity bugs, Google pays out $1,337 for flaws in its software, and Deutsche Post is sifting through applications from ‘ethical’ hackers to approve teams who will go head to head for its Security Cup in October. The winning team gets to hold the trophy aloft if it finds vulnerabilities in Deutsche Post’s new online secure messaging service – which must be comforting to current users. So, are these incentives the best way to make sure your applications are secure?
At Idappcom, we’d argue that these schemes are little more than publicity stunts and, in fact, can be dangerous to an end user's security.
One concern is that inviting hackers to trawl all over a new application prior to its launch simply grants them more time to interrogate it and identify weaknesses, which they may decide are more valuable if kept to themselves. Once the first big announcement is made detailing who has purchased the application, and where and when the product will go live, the hacker can use this insight to breach the system and steal the corporate jewels.
A further worry is that, while on the surface it may seem that these companies are being open and honest, if a serious security flaw were identified, would they raise the alarm and warn people? It’s my belief that they’d fix it quietly, release a patch and hope no one hears about it. The hacker would happily claim the reward, promise a vow of silence and then ‘sell’ the details on the black market. While the patch is being developed, or if users fail to install the update, they are left with a gaping security void in their defences just waiting to be exploited.
Sometimes it’s not even a flaw in the software that causes problems. If an attack is launched against the application, causing it to fail and reboot, this denial-of-service (DoS) attack can be just as costly to your organisation as a breach in which data is stolen.
A final word of warning: even if the application isn’t hacked today, that doesn’t mean it won’t be breached tomorrow. Windows Vista is one such example. Microsoft originally hailed it as the most secure operating system it had ever made, and we all know what happened next.
A proactive approach to security
IT is never infallible, and for this reason penetration testing is often heralded as the hero of the hour. That said, technology has moved on and, while still valid in certain circumstances, traditional penetration-testing techniques are often limited in their effectiveness. Let me explain: a traditional test is executed from outside the network perimeter, with the tester seeking applications to attack. However, because these assaults all come from a single IP address that never changes, intelligent security software will recognize the behavior. Within the first two or three attempts the source address is blacklisted or firewalled, and all subsequent traffic is immaterial because every activity is seen and treated as malicious.
An intelligent proactive approach to security
There isn’t a single piece of advice that is the answer to all your prayers. Instead you need two measures, and both need to run simultaneously if your network is to perform in perfect harmony: application testing combined with intrusion detection.
The reason I advocate application testing is that, if a public-facing application were compromised, the financial impact on the organization could be fatal. There are technologies available that can test your device or application with a barrage of millions upon millions of iterations, using different broken or mutated protocols and techniques, in an effort to crash the system. If a hacker were to do this and caused the application to fall over or reboot, the resulting denial of service could be embarrassing at best and, at worst, detrimental to your organization.
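To make the idea concrete, here is a minimal Python sketch of that kind of mutation testing. The target host, baseline request, mutation strategy and crash check are all illustrative assumptions rather than a description of any particular product, and real tools run millions of far more sophisticated protocol mutations.

```python
# Minimal sketch of mutation-based ("fuzz") testing against a network service.
# Everything here is illustrative: the target host/port, the baseline request,
# and the simple crash check are assumptions, not a real product's behaviour.
import random
import socket

TARGET = ("test-server.example.com", 8080)   # hypothetical system under test
BASELINE = b"GET /status HTTP/1.1\r\nHost: test-server.example.com\r\n\r\n"

def mutate(payload: bytes) -> bytes:
    """Flip, insert, or truncate random bytes to produce a malformed request."""
    data = bytearray(payload)
    for _ in range(random.randint(1, 8)):
        choice = random.random()
        pos = random.randrange(len(data))
        if choice < 0.4:
            data[pos] = random.randrange(256)          # corrupt a byte
        elif choice < 0.7:
            data.insert(pos, random.randrange(256))    # inject a byte
        else:
            del data[pos:]                             # truncate the request
            break
    return bytes(data)

def still_alive() -> bool:
    """Crude health check: can we still open a TCP connection to the target?"""
    try:
        with socket.create_connection(TARGET, timeout=2):
            return True
    except OSError:
        return False

def fuzz(iterations: int = 10_000) -> None:
    for i in range(iterations):
        case = mutate(BASELINE)
        try:
            with socket.create_connection(TARGET, timeout=2) as conn:
                conn.sendall(case)
        except OSError:
            pass  # a refused or dropped connection is itself worth noting
        if not still_alive():
            print(f"Target stopped responding after case {i}: {case!r}")
            return
    print("Target survived all mutated requests.")

if __name__ == "__main__":
    fuzz()
```

Only ever point a test like this at systems you own or are authorised to test, and ideally at a non-production copy, since the whole point is to try to knock the service over.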
Intrusion detection, capable of spotting zero-day exploits, must be deployed to audit and test the recognition and response capabilities of your corporate security defences. It will substantiate not only that the network security is deployed and configured correctly, but also that it is capable of protecting the application you are about to make live or have already launched, irrespective of the service it supports – be it email, a web service, anything. The device looks for behavioural characteristics to determine whether an incoming request to the product or service is likely to be good and valid, or indicative of malicious behavior. This provides not only reassurance but, crucially, proof that the network security is capable of identifying and mitigating the latest threats and security evasion techniques.
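As a rough illustration of that behavioural inspection, the sketch below scores an incoming request against a handful of invented rules and thresholds; a real intrusion detection engine uses far richer protocol analysis, signatures and anomaly models than this toy scorer.

```python
# Toy illustration of behaviour-based request inspection. The rules, scores and
# thresholds below are invented for this example; a real intrusion detection
# engine uses far richer protocol analysis, signatures and anomaly models.
import re
from urllib.parse import unquote_to_bytes

SUSPICIOUS_PATTERNS = [
    re.compile(rb"(?i)union\s+select"),   # SQL injection probe
    re.compile(rb"\.\./\.\."),            # directory traversal
    re.compile(rb"(?i)<script"),          # reflected XSS probe
]

def classify_request(raw_request: bytes, max_length: int = 8192) -> str:
    """Return 'likely malicious' or 'likely valid' for a raw HTTP request."""
    decoded = unquote_to_bytes(raw_request)  # undo simple URL-encoding evasion
    score = 0
    if len(raw_request) > max_length:        # abnormally large request
        score += 2
    if b"\x00" in decoded:                   # embedded null bytes
        score += 3
    for pattern in SUSPICIOUS_PATTERNS:
        if pattern.search(decoded):
            score += 3
    return "likely malicious" if score >= 3 else "likely valid"

if __name__ == "__main__":
    print(classify_request(b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"))
    print(classify_request(b"GET /search?q=1%27%20UNION%20SELECT%20pw HTTP/1.1\r\n\r\n"))
```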
While we wait with bated breath to see who will lift Deutsche Post’s Security Cup, we must not lose sight of our own challenges. My best advice: instead of waiting for the outcome and relying on others to keep you informed of vulnerabilities in your applications, regularly inspect your own defences to make sure they’re standing strong with no chinks. If you don’t, the bounty may as well be on your head.
What do you think?
Monday, November 22, 2010
Congress Should Amend COICA
Last week, the U.S. Senate Judiciary Committee unanimously voted to approve the "Combating Online Infringements and Counterfeits Act" (COICA). The bill would allow the U.S. Attorney General to obtain a court order disabling web domains deemed to be “dedicated to infringing activities.”
Intellectual property scholars at the Competitive Enterprise Institute praised the bill in principle but warned that the legislation's current provisions threaten free speech and lack crucial safeguards to protect against the unwarranted suspension of Internet domain names.
“Combating piracy and counterfeiting on the Internet should be a priority for Congress, but care should be taken to ensure that legislative attempts to protect intellectual property rights do not harm other vital interests,” said Ryan Radia, CEI Associate Director of Technology Studies. “COICA’s overbroad definition of Internet sites 'dedicated to infringing activities' risks ensnaring legitimate websites. The bill also lacks a provision ensuring that Internet site operators targeted by the Attorney General have an opportunity to defend their site in an adversary judicial proceeding."
Over three dozen law professors recently submitted a letter to the U.S. Senate raising concerns about COICA, arguing that the bill suffers from “egregious Constitutional infirmities.”
“In its current form, elements of COICA raise serious First Amendment concerns,” said Hans Bader, CEI Senior Attorney. “If enacted, the law will not likely survive a constitutional challenge.”
Radia argued that Congress should amend COICA to provide for more robust safeguards, including:
• Providing a meaningful opportunity for Internet site operators to challenge before a federal court an Attorney General’s assertion that their site is “dedicated to infringing activities” prior to the domain name's suspension;
• Requiring that the Attorney General, prior to commencing an in rem action against a domain name, make a reasonable attempt to notify the site’s actual operator;
• Clarifying the definition of an Internet site “dedicated to infringing activities” to ensure that Internet sites with cultural, artistic, political, scientific, or commercial value that facilitate infringing acts by third parties do not face domain name suspension if their operators comply with legitimate takedown requests;
• Instructing the Department of Justice and federal prosecutors not to request that domain name registrars, registries, or service providers suspend domain names that have not been deemed to be “dedicated to infringing activities” by a federal court;
• Requiring the Department of Justice to compensate domain name registrars, registries, and service providers for any reasonable costs they incur in the course of disabling infringing domain names.
What do you think?
Data: Lost or Misplaced?
By Rich Walsh
Looking at the Kroll Ontrack “Global Data Loss Causes” survey, I found it interesting that 90 percent of respondents have lost data, and 18 percent did not know how the data went missing. Mind you, these losses could be attributed to occurrences such as data corrupted by a virus or simple human error – files being misfiled or accidentally deleted. But I immediately thought, “Perhaps it wasn’t lost; it just couldn’t be found.”
Having written and spoken about data storage for years, one theme has remained constant: the amount of data that corporations must manage is growing and shows no signs of stopping. Keeping track of this mass of data is a daunting challenge for many companies.
I often hear from IT executives that they are frustrated by the multitude of archiving systems at their organizations as more and more repositories are installed to meet data growth. Misplacing data becomes very plausible, and even typical, in this type of environment.
Losing data is never a good thing and when it happens, whether in a household or at a major corporation, it can create some headaches – to put it mildly. In the current environment, losing data is simply not an option as new regulations are sure to put more demands on data recovery. The consequences for missing data can be severe; you only need to read the mortgage-foreclosure headlines to get a sense of this.
Storage professionals may be feeling pressure from IT executives to fix the problem while managing costs. Data management should not be an obstacle to a corporation’s primary business objective. Now is the ideal time to address this issue because there is no apparent end in sight for the onslaught of data.
How is your company handling the barrage?
Rich Walsh is President, Document Archive & Repository Services at Viewpointe. He has more than 25 years of operational information technology experience.
Friday, November 12, 2010
The Top 5 Compliance Issues That Smolder Beneath The Surface
By Dan Wilhelms
When firefighters arrive at a burning building, their first priority (of course) is to knock down the visible flames. Yet experienced firefighters know that when those flames are extinguished, the job isn’t done yet. That’s the time they go in and start looking for the hidden flames – the smoldering materials in a ceiling or behind a wall that could suddenly erupt and engulf them when they’re not expecting it. They know those hidden fires can be the most dangerous of all simply because they can’t be seen until it’s too late.
For the past few years, IT and compliance managers have been like those firefighters first arriving on the scene. You’ve been putting out the compliance fires – the big issues that have been burning brightly since SOX legislation was passed in the early part of the millennium. You’ve done a good job too, creating a new compliance structure where roles are defined, segregation of duties (SOD) is the standard and transactions are well-documented.
Yet just like those firefighters, the job isn’t finished yet. There are still all kinds of compliance issues that, while not as visible as the first ones you tackled, can still create a back-draft that will burn your organization if you’re not careful. Following are five of the most pressing (and potentially dangerous).
Excessive access – With the complexity of the security architecture that is part of modern ERP systems, it’s easier than you might think to accidentally give some users access to potentially sensitive transactions that might be far outside their job descriptions. Access is usually assigned by the help desk, and in the heat of battle, with many pressing issues, they may not be as careful about assigning or double-checking authorizations as they should be. When that occurs, it can lead to all types of dangers.
Imagine a parts picker in the warehouse being given access to every SAP transaction in the organization (which has happened, by the way). In that instance, the warehouse worker started running and looking at transactions (including financial transactions) just out of curiosity. But what if he’d had a different agenda? He could have changed the data, either accidentally or maliciously, or executed a fraudulent transaction, creating a serious compliance breach.
Even if he didn’t change anything, there’s still a productivity issue. After all, if he’s busy running a myriad of SAP transactions, he’s not busy picking orders.
Excessive access is not the type of issue that will show up in a SOD report. The best way to address it is by installing governance, risk and compliance (GRC) software that makes managing security and authorization easier. The software should also provide you with tools that help you measure and monitor actual system usage so you can see whether the things users are doing and the places they’re going within the system are appropriate to their job requirements. Having automated systems in place is particularly important in smaller enterprises that usually do not have the resources for a lot of manual inspection.
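As a rough sketch of the kind of usage monitoring described above, the example below compares the transactions a user actually executed against the set permitted for their role. The role definitions, SAP-style transaction codes and log entries are illustrative assumptions; a real GRC product would pull this data from the ERP system itself.

```python
# Toy sketch of usage monitoring: flag executed transactions that fall outside
# a user's assigned role. Roles, transaction codes and the log are illustrative.
ROLE_TRANSACTIONS = {
    "warehouse_picker": {"LT03", "LT12"},   # SAP-style picking transactions, for illustration
    "accounts_payable": {"FB60", "F110"},   # SAP-style AP transactions, for illustration
}

EXECUTED = [                                # in practice this comes from the system's audit log
    ("jsmith", "warehouse_picker", "LT03"),
    ("jsmith", "warehouse_picker", "FB60"), # a financial transaction run by a picker
]

def out_of_role_activity(executed, role_transactions):
    """Yield (user, transaction) pairs not covered by the user's role."""
    for user, role, transaction in executed:
        if transaction not in role_transactions.get(role, set()):
            yield user, transaction

if __name__ == "__main__":
    for user, transaction in out_of_role_activity(EXECUTED, ROLE_TRANSACTIONS):
        print(f"Review: {user} executed {transaction} outside their role")
```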
Access to sensitive data – Users don’t necessarily need access to a broad variety of data to pose a risk; they just need access to particular data. For example, who can open and close posting periods? Who can view HR salary and benefits information? Again, this is nothing that is likely to show up on a SOD report, yet it’s a very real risk.
We’ve all heard the stories about how a certain soft drink manufacturer’s formula is better-guarded than the launch codes for nuclear weapons. Imagine if the formula was sitting on the ERP system and the wrong person was given access to it – or given access to payroll, HIPAA or other sensitive information.
One key to controlling access to sensitive data, of course, is to exercise more care when assigning authorizations. These are called preventative controls. It’s also important to use reverse business engineering tools to see who does have access to sensitive transactions, whether that access is appropriate, and what they did with the information once they had it. These are called detective controls. It’s like following the smoke to discover where the hidden fire is.
Poor segregation of duties – Although SOD has already been mentioned, some organizations are not familiar with what it is and its purpose. Let’s look at the nuclear missiles analogy again. In order to launch, there are two keys controlled by two different people. Two keys are used to assure that no one person has control of the missiles in case someone decides to “go rogue.”
It’s the same with financial transactions in an enterprise. You don’t want one person to be able to create a vendor in your SAP system and then initiate payment of that same vendor; you’re just asking people to steal from you.
That’s why it’s important to have value-added tools that analyze user access against the enterprise’s SOD rulebook and flag any conflicting functions. An ongoing analysis will point out any areas of risk so they can be remediated, and keep you informed should the situation change.
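The sketch below shows, in simplified form, what such an analysis might look like: each user's assigned functions are checked against a rulebook of conflicting pairs. The rules and user assignments are invented for illustration and are far coarser than a commercial SOD engine's.

```python
# Minimal sketch of an SOD check: flag users whose assigned functions include a
# conflicting pair from the rulebook. Rules and assignments are illustrative only.
SOD_RULEBOOK = [
    ("create_vendor", "pay_vendor"),           # classic procure-to-pay conflict
    ("maintain_payroll", "approve_payroll"),
]

USER_FUNCTIONS = {
    "alice": {"create_vendor", "pay_vendor", "run_reports"},
    "bob":   {"maintain_payroll", "run_reports"},
}

def sod_conflicts(user_functions, rulebook):
    """Yield (user, conflicting_pair) for every violated rule."""
    for user, functions in user_functions.items():
        for first, second in rulebook:
            if first in functions and second in functions:
                yield user, (first, second)

if __name__ == "__main__":
    for user, pair in sod_conflicts(USER_FUNCTIONS, SOD_RULEBOOK):
        print(f"SOD conflict for {user}: {pair[0]} + {pair[1]}")
```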
Of course, in a smaller organization, conflicting duties may not be avoidable. Everyone is expected to wear multiple hats, and sometimes those hats do not allow for proper segregation. In those instances, you need tools that can monitor actual transactions and report against them so you can see if a compliance violation is occurring. In other words, if someone has to carry both keys, mitigating controls tell you when they’ve inserted both into the control panel.
Even with the proper tools, it’s unlikely you’ll ever bring SOD conflicts down to zero. But you can get awfully darned close, and keep an eye on what happens from there.
Introduction of malicious programs into production systems – The modern reality is that ERP systems are rarely steady state. Often enterprises have multiple initiatives going on that introduce new data, configuration and programs into the production systems.
With lean staffing and urgent deadlines, changes are often not properly tested or audited. In other words, proper change management is not used. A developer who has the means, the motive and the belief that he or she can get away with it can wreak all kinds of havoc by including malicious code alongside legitimate code when new applications are moved into production. Malicious code can download sensitive data, create fraudulent transactions, delete data or crash the systems.
It is critical to have a second person reviewing any changes at every step of the way. What that means is the person who requests the change can’t be the person who develops it; the developer can’t be the person who tests it; the person who tests it can’t be the same person who migrates it into production. In other words, transport development and approvals cannot be given by a single person – instead, an independent approver or even a committee must be controlling the entire process.
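A minimal sketch of that four-eyes rule applied to a change record is shown below; the field names and sample data are assumptions for the example, since in practice this information lives in the change-management or transport workflow itself.

```python
# Sketch of a four-eyes check on a change/transport record. The record fields
# and sample values are invented for the example.
REQUIRED_ROLES = ("requested_by", "developed_by", "tested_by", "migrated_by")

def segregation_violations(change_record: dict) -> list:
    """Return a list of role pairs held by the same person."""
    violations = []
    for i, first in enumerate(REQUIRED_ROLES):
        for second in REQUIRED_ROLES[i + 1:]:
            person = change_record.get(first)
            if person and person == change_record.get(second):
                violations.append(f"{first} and {second} are both {person}")
    return violations

if __name__ == "__main__":
    change = {
        "id": "TR-1042",
        "requested_by": "carol",
        "developed_by": "dave",
        "tested_by": "dave",      # same person developed and tested: flag it
        "migrated_by": "erin",
    }
    for violation in segregation_violations(change):
        print(f"{change['id']}: {violation}")
```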
Change management duties need to be segregated and managed throughout the entire process. Even if not malicious, poorly coded, untested programs can result in a catastrophic outage. Given that in a large enterprise an hour of downtime can cost $1 million, it’s easy to see why proper change management is worth the investment.
Emergency access – In large ERP environments, there’s always the chance that emergency maintenance of production systems will need to be performed. When it does, and the enterprise is dialing 9-1-1, someone needs to be given emergency “super user” access to everything in the system. Such emergency maintenance is often performed by outside parties (e.g. the software vendor or third-party consultants).
The problem is that these emergency all-access passes aren’t always tracked very well. Everyone is so focused on putting out the fire – for example, unlocking a sales order that has frozen the entire system – that they never think about documenting what transactions were performed or what data was changed. The risk is increased by the widespread use of generic “firefighter” user IDs, whereby the individual performing the actions isn’t definitively known.
You’d like to think that the person you give super user access to can be trusted. But blind trust is what has gotten other enterprises into trouble in the past. The person with full access may make other changes while he/she is in there – either accidentally or on purpose. You need to be able to monitor who has all-access and what they do while they have it.
It is critical to have tools that allow you to track what these super-users do while they’re in the system. Not just for the day-to-day operation of the business, but for the auditors as well. When auditors see someone has been given this additional emergency access, their job is to immediately assume the person did something nefarious. It will be your job to prove they didn’t. You’ll need to show why access was granted, what was done while the person was in there, when/how long the person was in the system, what changes were made and when the person exited.
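To illustrate the kind of answers auditors expect, here is a toy sketch that pulls one firefighter session's actions out of an activity log and summarises who used the ID, why, when and what was done. The log format, field names and sample entries are all invented for the example.

```python
# Toy sketch of reporting on emergency ("firefighter") access: given an activity
# log, answer when the emergency ID was used and what was done with it.
# The log format, field names and sample data are invented for the example.
from datetime import datetime

ACTIVITY_LOG = [
    # (timestamp, user_id, action)
    ("2010-11-12 02:14:03", "FIREFIGHTER01", "unlock sales order 4711"),
    ("2010-11-12 02:21:47", "FIREFIGHTER01", "change pricing condition ZP01"),
    ("2010-11-12 02:25:10", "FIREFIGHTER01", "logoff"),
]

def session_report(log, firefighter_id, granted_to, reason):
    """Summarise a firefighter session for auditors: who, why, when, and what."""
    actions = [(ts, action) for ts, user, action in log if user == firefighter_id]
    if not actions:
        return f"No activity recorded for {firefighter_id}"
    start = datetime.strptime(actions[0][0], "%Y-%m-%d %H:%M:%S")
    end = datetime.strptime(actions[-1][0], "%Y-%m-%d %H:%M:%S")
    lines = [
        f"Emergency ID {firefighter_id} granted to {granted_to} ({reason})",
        f"Session window: {start} to {end} ({end - start})",
        "Actions performed:",
    ]
    lines += [f"  {ts}  {action}" for ts, action in actions]
    return "\n".join(lines)

if __name__ == "__main__":
    print(session_report(ACTIVITY_LOG, "FIREFIGHTER01",
                         granted_to="jsmith", reason="frozen sales order"))
```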
While it’s important to put out the big compliance blazes, keep in mind those are the ones that are also easy to see. Once they’re under control, take a tip from the professional firefighters and be sure to check for the smaller, smoldering flashpoints. It’s your best insurance against getting burned.
Dan Wilhelms is President and CEO of SymSoft Corporation (www.controlpanelGRC.com), the makers of ControlPanelGRC, professional solutions for compliance automation. He can be reached at dwilhelms@sym-corp.com.
Tuesday, November 2, 2010
Privacy Laws Must Change with the Times
By Todd Thibodeaux and David Valdez
A brave new world of technological innovation is emerging - some would say it has already emerged. Although we cannot predict the next killer app or revolutionary invention, we can be fairly sure that it will involve the use of personally identifiable information. Consumers have enthusiastically adopted personalized applications of all varieties, yet the way things stand now they must be prepared to sacrifice something at least as valuable: their privacy.
Congress is just beginning the complex process of developing legislation to protect consumer privacy while nurturing innovation in products and services. An important way to achieve the delicate balance between encouraging technology and preserving privacy is for Congress to expand the capabilities of the Federal Trade Commission (FTC) to ensure that it can keep up with the rapidly evolving marketplace.
In the mid to late 1990s, the FTC began reviewing how websites collected and managed consumers’ personally identifiable information. This led to the creation of a set of self-regulatory rules known as the Fair Information Practice Principles, which created four basic obligations: (1) consumers must be notified as to whether their online information is being collected, (2) consumers must provide consent as to whether or not they want their online information collected, (3) consumers must be able to view information a company has collected about them and verify its accuracy, and (4) businesses must undertake measures to ensure that information is accurate and stored securely.
The framework of the Fair Information Practice Principles is a good place to start when considering future privacy legislation. Over the past decade it has demonstrated a suitable balance between responsible privacy standards and room for innovation. However, as technology evolves, the FTC should be able to keep up. The FTC should be given the discretion and flexibility to adapt, update and strengthen the Fair Information Practice Principles, as well as its own role in safeguarding consumer privacy, in response to changing technologies and consumer needs.
The FTC, in partnership with the private sector, should create privacy notices that are easy to read and understand, in conjunction with an education campaign to inform consumers about their rights. Many privacy notices are dense and contain so much legalese that they become ineffective because consumers don’t read them.
Congress should provide the FTC with the resources to create an Online Consumer Protection bureau that focuses exclusively on online crimes such as identity theft and e-mail scams, as well as privacy enforcement. This would expand the FTC’s capabilities to investigate, prosecute and enforce consequences against breaches of privacy.
Any attempt to impose new privacy standards should distinguish between good actors that slip up inadvertently and bad actors that aim to cause trouble. A safe harbor program accomplishes this by reducing liability when actions are performed in good faith. Safe harbor programs provide a combination of carrot and stick, allowing the FTC to run different programs for different actors.
As policymakers continue to deliberate the best path for balancing the various stakeholder interests around the issue of online privacy, they must remember that any proposed legislation should not be absolute. The current set of privacy principles adopted by the FTC has worked well for over a decade and should serve as a framework for any new legislation. Technology is a moving target and privacy laws should be sufficiently flexible to adapt.
Todd Thibodeaux is CEO and president of CompTIA, a non-profit trade association advancing the global interests of information technology (IT) professionals and businesses (www.comptia.org). Todd can be reached at tthibodeaux@comptia.org. David Valdez is the organization’s senior director of public advocacy. David can be reached at dvaldez@comptia.org.
Monday, October 25, 2010
Hey, America: TMI!
Posted by Mark Brousseau
A new national survey reveals half of Americans who use social networking sites have seen people divulge too much personal information, yet more than a quarter of Americans (28 percent) who use these sites admit that they rarely think about what could happen if they share too much personal information online.
Additionally, more than four in ten Americans (44 percent) are concerned that the personal information they share online is being used against them, and more than one in five (21 percent) Americans who use social networking sites believe that their personal information has been accessed by people who take advantage of weak privacy settings on social networking sites.
That's according to the 2010 Lawyers.com Social Networking Survey.
“The Lawyers.com Social Networking Survey reveals a clear disconnect between the privacy concerns of users and their actual behaviors and disclosures on social networking sites,” said Carol Eversen, vice president of Marketing at LexisNexis. “Nearly every week we hear about the negative consequences resulting from inappropriate disclosures and uses of personal information on social networking sites, however the data suggests that Americans are not taking the necessary steps to protect themselves.”
More than half of Americans who use social networking sites have seen people divulge too much personal information online. In fact, the majority of Americans who use social networking sites admit that they have posted their first and last name (69 percent), photos of themselves (67 percent), or an email address (51 percent) on a social networking site. In addition, survey respondents have also shared the following details on a social networking site:
•Travel plans (16 percent)
•Cell phone numbers (7 percent)
•Home address (4 percent)
Determining how much is too much is still a struggle for many people. Nearly half of Americans (46 percent) agree that sometimes it is hard to figure out what information to share and what to keep private.
As many Americans struggle with what type of personal information to post online and keep private, they also seldom think about the consequences of sharing personal information online. More than a quarter of Americans (28 percent) admit they rarely think about what could happen if they shared too much personal information online.
A quarter of Americans (25 percent) who use social networking sites say that they have seen people “misrepresent” themselves (e.g., posted incorrect information and created fake profiles) and alarmingly, more than one in ten Americans (14 percent) who use social networking sites say that they have received communication from strangers as a result of sharing information on a social networking site.
Other backlash from using social networking sites includes:
•Someone posting unflattering pictures of them (11 percent)
•Having personal relationships with family or friends affected from revealing too much information (7 percent)
•Being scolded or yelled at for information they’ve posted (6 percent)
Surprisingly, 38 percent of Americans agree that people who share too much of their personal information online deserve to have their information used inappropriately.
Three-quarters of Americans (76 percent) worry that the privacy settings on social networking sites are not adequately protecting their personal information. In addition, more than four in ten Americans (43 percent) admit that they typically just click “agree” without reading the entire terms and conditions on social networking sites.
Meanwhile, many believe that their personal information may already be in the wrong hands. More than four in ten Americans (44 percent) are concerned that the personal information they share online is being used against them, and one in five Americans (21 percent) who use social networking sites believe that their personal information has been accessed by people who take advantage of weak privacy settings on social networking sites.
What do you think?
Thursday, September 23, 2010
Online Storage and Privacy Laws
Posted by Mark Brousseau
If you store sensitive files on your personal computer which law enforcement authorities wish to examine, they generally cannot do so without first obtaining a search warrant based upon probable cause. But what if you store personal information online—say, in your Gmail account, or on Dropbox? What if you’re a business owner who uses Salesforce CRM or Windows Azure? How secure is your data from unwarranted governmental access?
Both the U.S. Senate and the House of Representatives are investigating these crucial questions in two separate hearings this week. Congress hasn’t overhauled the privacy laws governing law enforcement access to information stored with remote service providers since 1986. The Electronic Communications Privacy Act (ECPA), the key federal law governing electronic privacy, has grown increasingly out of touch with reality as technology has evolved and Americans have grown increasingly reliant on cloud services like webmail and social networking. As a result, government can currently compel service providers to disclose the contents of certain types of information stored in the cloud without first obtaining a search warrant or any other court order requiring the scrutiny of a judge.
Against this backdrop, the Competitive Enterprise Institute has joined with The Progress & Freedom Foundation, Americans for Tax Reform, Citizens Against Government Waste, and the Center for Financial Privacy and Human Rights in submitting a written statement to the U.S. Senate and House Judiciary Committees urging Congress to reform U.S. electronic privacy laws to better reflect users’ privacy expectations in the information age. The groups also belong to the Digital Due Process coalition, a broad array of public interest organizations, businesses, advocacy groups, and scholars who are working to strengthen U.S. privacy laws while also preserving the building blocks of law enforcement investigations.
“The success of cloud computing—and its benefits for the U.S. economy—depends largely on updating the outdated federal statutory regime that currently governs electronic communications privacy,” the statement argues. “If Congress wants to ensure Americans enjoy the full benefits of the cloud computing revolution, it should simply reform ECPA in accordance with the principles proposed by the Digital Due Process coalition.”
What do you think?
Tuesday, May 4, 2010
Group says legislation threatens electronic commerce
Posted by Mark Brousseau
Reps. Rick Boucher (D-Va.) and Cliff Stearns (R-Fla.) today unveiled draft legislation aimed at improving online privacy that would impose new rules on companies that collect individual data on the Internet. But technology analysts at the Competitive Enterprise Institute warned that the proposed bill would actually harm consumers and hinder the evolution of online commerce.
“Substituting federal regulations for competitive outcomes in the online privacy arena interferes with evolution of the very kind of authentication and anonymity technologies we urgently need as the digital era evolves,” argues Wayne Crews, vice president for Policy.
“Today, businesses increasingly compete in the development of technologies that enhance our privacy and security, even as we share information that helps them sell us the things we want. This seeming tension between the goals of sharing information and keeping it private is not a contradiction -- it’s the natural outgrowth of the fact that privacy is a complex relationship, not a ‘thing’ for governments to specify for anyone beforehand,” Crews states.
“This legislation flips the proper definition of privacy on its head, wrongly presuming that individuals deserve a fundamental right to control information they’ve voluntarily disclosed to others online. But in the digital world, information collection and retention is the norm, not the exception. Privacy rights, where they exist, arise from voluntary privacy policies. The proper role of government is to enforce these policies, not dictate them in advance,” argues Ryan Radia, associate director of Technology Studies.
“If Rep. Boucher wants to strengthen consumer privacy online, he should turn his focus to constraining government data collection, which poses a far greater privacy threat than private sector data collection. A good starting point would be reexamining the Electronic Communications Privacy Act, the outdated 1986 law that governs governmental access to private communications stored online. Strengthening these privacy safeguards, as a broad coalition of companies and activist groups are now urging, will empower firms to offer stronger privacy assurances to concerned users,” Radia states.
What do you think?
Sunday, February 28, 2010
ARRA: A Whole New World
By Mark Brousseau
Last year was a year of transition for HIPAA, medical privacy and medical banking, Richard D. Marks of McLean, VA-based Patient Command, Inc. (www.patientcommand.com), told attendees this afternoon at the Medical Banking Project Boot Camp at the HIMSS10 conference in Atlanta.
“ARRA changes the rules for security of health information in the United States,” Marks said. “It creates an entirely new framework because it changes HIPAA so much and because it changes privacy in medical records. And, most significantly, it changes the whole approach to enforcement.”
“It’s fair to say that for the last decade, there has not been any real attempt on the part of the federal government to enforce HIPAA,” Marks explained. “ARRA changes that. What it brings into law, for the first time, is the hierarchy of diligence and culpability. There are increased, tiered civil and criminal monetary penalties, topping out at $50,000 per violation, with an annual limit of $1,500,000. These numbers are enough to get your attention. But the statute also includes civil and criminal liability for individuals, as well as organizations. Which individuals, you ask? Well, it could be you! And some people won’t figure this out, and you will see some prosecutions,” Marks predicted.
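As a rough illustration of the penalty arithmetic Marks describes, the short sketch below caps per-violation fines at the annual limit he cites. It is a toy calculation in Python, not legal guidance; only the $50,000 and $1,500,000 figures come from his remarks, and everything else is an assumption made for the example.

    # A toy model of the tiered-penalty arithmetic described above.
    # The $50,000 per-violation figure and the $1,500,000 annual cap come
    # from Marks' remarks; everything else is illustrative.
    ANNUAL_CAP = 1_500_000

    def annual_penalty_exposure(violations: int, per_violation_fine: int) -> int:
        """Per-violation fines summed, then limited by the annual cap."""
        return min(violations * per_violation_fine, ANNUAL_CAP)

    # 40 top-tier violations would nominally cost $2,000,000, but the cap
    # limits exposure to $1,500,000 for the year.
    print(annual_penalty_exposure(40, 50_000))   # -> 1500000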
Integrated health information security is inherent in ARRA, Marks added.
The requirements referenced in business associate contracts now, by law, apply mutually to covered entities and business associates, Marks pointed out. “The impact of that is to rebalance all of the risk allocation that is in these agreements, and it creates a whole new set of possibilities for liabilities. Some folks will be less affected than others. But some of you will be affected enormously,” Marks said.
For instance, security is now an active responsibility of the board of directors and senior executives if you are doing anything that touches healthcare, Marks said. “If you’re a public company, you’ve really got to ask yourself how you do disclosure when you have to take on a much greater risk for your information systems,” Marks said. “What this all means is that you must have integrated, shared systems security that is comprehensive and fast, and upgraded from what you now have.”
Some of the changes in ARRA won’t go into effect until 2011. “But some of this is in effect now, because people, such as ambitious state attorneys general, are going to start enforcing HIPAA,” Marks said. “The bottom line is that ARRA makes it a whole new world in healthcare.”
Saturday, February 20, 2010
Compliance and Outsourcing
By Mark Brousseau
While new compliance, security and privacy regulations are likely to take a bigger bite out of operations budgets this year, most organizations believe they can meet the stricter rules without having to outsource their payments and document processing. Just 20 percent of respondents to a recent TAWPI Question of the Week said new compliance, security and privacy regulations would force their organization to consider outsourcing. Sixty-five percent of respondents said the tougher regulations wouldn't force them to consider outsourcing, and 15 percent of respondents said they weren't sure.
The time and cost associated with meeting compliance, security and privacy regulations continues to rise -- giving pause to any company entrusted with sensitive data that must be stored and shared.
"Regulatory compliance is very expensive and extremely time-consuming," says R. Edwin Pearce (epearce@egisticsinc.com), executive vice president of sales and corporate development for eGistics, Inc. "Companies have two choices for meeting regulatory demands for privacy and security: assume the full expense of the resources and time associated with meeting each regulation, or work with an outsource provider that can spread the costs of meeting the regulations across its customer base."
Pearce also believes that organizations should ask themselves whether it makes sense to go through the cost and trouble of becoming compliant, when there are outsource providers that already are.
"Companies don't necessarily have to absorb the full capital burden of meeting various certification and compliancy tests," Pearce explains. "For example, organizations that store images and data for multiple years may have to meet PCI, SAS 70 and HIPAA regulations. Rather than engineer a data center environment that meets all of these requirements -- including policy and procedural standards -- it may make better sense for the organization to partner with a compliant outsource provider."
"The result is faster compliance, at a significantly lower cost," Pearce adds.
With new regulations on the horizon, this is a decision more organizations will have to make.
What do you think?
Monday, February 15, 2010
Healthcare Standardization
Posted by Mark Brousseau
Standardization of industry practices is critical to the strength of the healthcare market. Lee Barrett, executive director of the Electronic Healthcare Network Accreditation Commission (EHNAC), explains:
As the healthcare industry continues to evolve to meet regulations and requirements outlined in ARRA, HITECH and HIPAA, more than ever, there’s need for standardization of industry practices and optimization of stakeholder cooperation. Coupled with the complex issues surrounding interoperability, privacy, security and access is the fact that healthcare networks, financial service firms, payer networks, e-Prescribing and other solution providers and vendors need to overtly demonstrate their readiness, competence and capability to address these issues and comply with a complex web of regulations.
When any industry goes through the process of defining the standards to which industry participants should adhere, that industry becomes stronger in its own operations and earns greater respect from affiliated and external stakeholders. This is precisely the case with the electronic healthcare transaction industry.
EHNAC, or the Electronic Healthcare Network Accreditation Commission, is focused on establishing, developing, updating and refining the criteria that determine whether organizations operating in the healthcare electronic transaction industry receive accreditation. Through a dialogic process that builds on stakeholder recommendations, insights and comments, EHNAC develops and promotes criteria for best practices, which focus on simplifying administrative processes, maintaining open competition and enhancing operational integrity.
In January, EHNAC announced the finalization and adoption of program criteria for 2010. This announcement concluded a 60-day public comment period for the following programs:
1. ASPAP-EHR – Application Service Provider Accreditation Program for Electronic Health Records
2. ePAP – e-Prescribing Accreditation Program
3. FSAP EHN – Financial Services Accreditation Program for Electronic Health Networks
4. FSAP Lockbox – Financial Services Accreditation Program for Lockbox Services
5. HNAP EHN – Healthcare Network Accreditation Program for Electronic Health Networks
6. HNAP Medical Biller – Healthcare Network Accreditation Program for Medical Billers
7. HNAP TPA – Healthcare Network Accreditation Program for TPAs
8. HNAP-70 – Healthcare Network Accreditation Plus Select SAS 70© Criteria Program
9. OSAP – Outsourced Services Accreditation Program
In addition, the commission developed draft criteria for Health Information Exchange (HIE) entities. In February, these draft criteria were released for a 60-day public comment and review period and will be finalized during the second quarter of 2010.
The issues addressed through the criteria review and approval process become increasingly complex, as the industry responds to specific provisions in the federal acts. Criteria for accreditation programs today address health data processing response times and security; privacy and confidentiality for financial service providers; and e-Prescribing timeliness and security. As regulatory guidelines become more complex, industry participants are called on to make sure their operations are simplified, secure and compliant.
Accreditation also simplifies the process of discerning between those who are adhering to industry standards, and those who are not.
Monday, June 15, 2009
Credit Card Security Problems
Posted by Mark Brousseau
An interesting article from the Associated Press on how lax requirements leave consumer data at risk of attack by hackers:
Weak security enables credit card hacks
By JORDAN ROBERTSON
AP Technology Writer
Every time you swipe your credit card and wait for the transaction to be approved, sensitive data including your name and account number are ferried from store to bank through computer networks, each step a potential opening for hackers.
And while you may take steps to protect yourself against identity theft, an Associated Press investigation has found the banks and other companies that handle your information are not being nearly as cautious as they could.
The government leaves it to card companies to design security rules that protect the nation's 50 billion annual transactions. Yet an examination of those industry requirements explains why so many breaches occur: The rules are cursory at best and all but meaningless at worst, according to the AP's analysis of data breaches dating to 2005.
It means every time you pay with plastic, companies are gambling with your personal data. If hackers intercept your numbers, you'll spend weeks straightening your mangled credit, though you can't be held liable for unauthorized charges. Even if your transaction isn't hacked, you still lose: Merchants pass to all their customers the costs they incur from fraud.
More than 70 retailers and payment processors have disclosed breaches since 2006, involving tens of millions of credit and debit card numbers, according to the Privacy Rights Clearinghouse. Meanwhile, many others likely have been breached and didn't detect it. Even the companies that had the payment industry's top rating for computer security, a seal of approval known as PCI compliance, have fallen victim to huge heists.
Companies that are not compliant with the PCI standards - including one in 10 of the medium-sized and large retailers in the United States - face fines but are left free to process credit and debit card payments. Most retailers don't have to endure security audits, but can evaluate themselves.
Credit card providers don't appear to be in a rush to tighten the rules. They see fraud as a cost of doing business and say stricter security would throw sand into the gears of the payment system, which is built on speed, convenience and low cost.
That is of little consolation to consumers who bet on the industry's payment security and lost.
It took four months for Pamela LaMotte, 46, of Colchester, Vt., to fix the damage after two of her credit card accounts were tapped by hackers in a breach traced to a Hannaford Bros. grocery store.
LaMotte, who was unemployed at the time, says she had to borrow money from her mother and boyfriend to pay $500 in overdraft and late fees - which were eventually refunded - while the banks investigated.
"Maybe somebody who doesn't live paycheck to paycheck, it wouldn't matter to them too much, but for me it screwed me up in a major way," she said. LaMotte says she pays more by cash and check now.
It all happened at a supermarket chain that met the PCI standards. Someone installed malicious software on Hannaford's servers that snatched customer data while it was being sent to the banks for approval.
Since then, hackers plundered two companies that process payments and had PCI certification. Heartland Payment Systems lost card numbers, expiration dates and other data for potentially hundreds of millions of shoppers. RBS WorldPay Inc. got taken for more than 1 million Social Security numbers - a golden ticket to hackers that enables all kinds of fraud.
In the past, each credit card company had its own security rules, a system that was chaotic for stores.
In 2006, the big card brands - Visa, MasterCard, American Express, Discover and JCB International - formed the Payment Card Industry Security Standards Council and created uniform security rules for merchants.
Avivah Litan, a Gartner Inc. analyst, says retailers and payment processors have spent more than $2 billion on security upgrades to comply with PCI. And the payment industry touts the fact that 93 percent of big retailers in the U.S., and 88 percent of medium-sized ones, are compliant with the PCI rules.
That leaves plenty of merchants out, of course, but the main threat against them is a fine: $25,000 for big retailers for each month they are not compliant, $5,000 for medium-sized ones.
Computer security experts say the PCI guidelines are superficial, including requirements that stores run antivirus software and install computer firewalls. Those steps are designed to keep hackers out and customer data in. Yet tests that simulate hacker attacks are required just once a year, and businesses can run the tests themselves.
"It's like going to a doctor and getting your blood pressure read, and if your blood pressure's good you get a clean bill of health," said Tom Kellermann, a former senior member of the World Bank's Treasury security team and now vice president of security awareness for Core Security Technologies, which audited Google's Internet payment processing system.
Merchants that decide to hire an outside auditor to check for compliance with the PCI rules need not spend much. Though some firms generally charge about $60,000 and take months to complete their inspections, others are far cheaper and faster.
"PCI compliance can cost just a couple hundred bucks," said Jeremiah Grossman, founder of WhiteHat Security Inc., a Web security firm. "If that's the case, all the incentives are in the wrong direction. The merchants are inclined to go with the cheapest certification they need."
For some inspectors, the certification course takes just one weekend and ends in an open-book exam. Applicants must have five years of computer security experience, but once they are let loose, there's little oversight of their work. Larger stores take it on themselves to provide evidence to auditors that they comply with the rules, leaving the door open for mistakes or fraud.
And retailers with fewer than 6 million annual card transactions - a group comprising more than 99 percent of all retailers - do not even need auditors. They can test and evaluate themselves.
At the same time, the card companies themselves are increasingly hands-off.
Two years ago, Visa scaled back its review of inspection records for the payment processors it works with. It now examines records only for payment processors with computer networks directly connected to Visa's.
In the U.S., that means fewer than 100 payment processors out of the 700 that Visa works with are PCI-compliant.
Visa's head of global data security, Eduardo Perez, said the company scaled back its records review because it took too much work and because the PCI standards have improved the industry's security "considerably."
"I think we've made a lot of progress," he said. "While there have been a few large compromises, there are many more compromises we feel we've helped prevent by driving these minimum requirements."
Representatives for MasterCard, American Express, Discover and JCB - which, along with Visa, steer PCI policy - either didn't return messages from the AP or directed questions to the PCI security council.
PCI's general manager, Bob Russo, said inspector certification is "rigorous." Yet he also acknowledged that inconsistent audits are a problem - and that merchants and payment processors who suffered data breaches possibly shouldn't have been PCI-certified. Those companies also might have easily fallen out of compliance after their inspection, by not installing the proper security updates, and nobody noticed.
The council is trying to crack down on shoddy work by requiring annual audits for the dozen companies that do the bulk of the PCI inspections. Smaller firms will be examined once every three years.
Those reviews merely scratch the surface, though. Only three full-time staffers are assigned to the task, and they can't visit retailers themselves. They are left to review the paperwork from the examinations.
The AP contacted eight of the biggest "acquiring banks" - the banks that retailers use as middlemen between the stores and consumers' banks. Those banks are responsible for ensuring that retailers are PCI compliant. Most didn't return calls or wouldn't comment for this story.
Mike Herman, compliance managing director for Chase Paymentech, a division of JPMorgan Chase, said his bank has five workers reviewing compliance reports from retailers. Most of the work is done by phone or e-mail.
"We have faith in the certification process, and we really haven't doubted the assessors' work," Herman said. "It's really the merchants that don't engage assessors; those get a little more scrutiny."
He defended the system: "Can you imagine how many breaches we'd have and how severe they'd be if we didn't have PCI?"
Supporters of PCI point out nearly all big and medium-sized retailers governed by the standard now say they no longer store sensitive cardholder data. Just a few years ago they did - leaving credit card numbers in databases that were vulnerable to hackers.
So why are breaches still happening? Because criminals have sharpened their attacks and are now capturing more data as it makes its way from store to bank, when breaches are harder to stop.
Security experts say there are several steps the payment industry could take to make sure customer information doesn't leak out of networks.
Banks could scramble the data that travels over payment networks, so it would be meaningless to anyone not authorized to see it.
For example, TJX Cos., the chain that owns T.J. Maxx and Marshalls and was victimized by a breach that exposed as many as 100 million accounts, the most on record, has tightened its security but says many banks won't accept data in encrypted form.
PCI requires data transmitted across "open, public networks" to be encrypted, but that means hackers with access to a company's internal network still can get at it. Requiring encryption all the time would be expensive and slow transactions.
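To make the "scrambling" the article describes more concrete, here is a minimal sketch of encrypting a card record at the point of capture with an authenticated cipher, so that it remains opaque even to someone sniffing an internal network. It assumes Python's third-party cryptography package and a hypothetical pre-shared key; real payment systems rely on hardware security modules and standardized formats that this sketch does not attempt to model.

    # Illustrative end-to-end encryption of a card record before it leaves
    # the point of capture. Key management is deliberately oversimplified:
    # a real deployment would use an HSM and per-terminal keys.
    import os, json
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)   # pre-shared with the acquirer (assumption)
    cipher = AESGCM(key)
    TERMINAL_ID = b"pos-terminal-01"            # bound to the ciphertext as associated data

    def encrypt_card_record(pan: str, expiry: str) -> bytes:
        nonce = os.urandom(12)                  # unique per message
        plaintext = json.dumps({"pan": pan, "exp": expiry}).encode()
        return nonce + cipher.encrypt(nonce, plaintext, TERMINAL_ID)

    def decrypt_card_record(blob: bytes) -> dict:
        nonce, ciphertext = blob[:12], blob[12:]
        return json.loads(cipher.decrypt(nonce, ciphertext, TERMINAL_ID))

    # Anything relayed between these two calls is opaque to an eavesdropper
    # on the store's internal network.
    print(decrypt_card_record(encrypt_card_record("4111111111111111", "12/27")))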
Another possibility: Some security professionals think the banks and credit card companies should start their own PCI inspection arms to make sure the audits are done properly. Banks say they have stepped up oversight of the inspections, doing their own checks of questionable PCI assessment jobs. But taking control of the whole process is far-fetched: nobody wants the liability.
PCI could also be optional. In its place, some experts suggest setting fines for each piece of sensitive data a retailer loses.
The U.S. might also try a system like Europe's, where shoppers need a secret PIN code and card with a chip inside to complete purchases. The system, called Chip and PIN, has cut down on fraud there (because it's harder to use counterfeit cards), but transferred it elsewhere - to places like the U.S. that don't have as many safeguards.
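A brief aside on why counterfeiting is harder under Chip and PIN: the chip holds a secret key that never leaves the card and uses it to compute a fresh cryptogram for every transaction, so data skimmed from one purchase cannot be replayed for another. The toy sketch below imitates that idea with an HMAC over transaction details; it is an illustration only, not the actual EMV cryptogram algorithm.

    # Toy imitation of a chip card's per-transaction cryptogram (not the
    # real EMV scheme). The secret never leaves the "chip", and the
    # counter changes every purchase, so skimmed data cannot be replayed.
    import hmac, hashlib, os

    card_secret = os.urandom(16)   # known only to the chip and the issuer (assumption)

    def transaction_cryptogram(amount_cents: int, counter: int, terminal_id: str) -> str:
        message = f"{amount_cents}|{counter}|{terminal_id}".encode()
        return hmac.new(card_secret, message, hashlib.sha256).hexdigest()[:16]

    # The issuer recomputes the same value to approve the purchase; a cloned
    # magnetic stripe has no secret, so it cannot produce a valid cryptogram.
    print(transaction_cryptogram(2499, counter=17, terminal_id="POS-42"))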
A key reason PCI exists is that the banks and card brands don't want the government regulating credit card security. These companies also want to be sure transactions keep humming through the system - which is why banks and card companies are willing to put up with some fraud.
"If they did mind, they have immense resources and could really change things," said Ed Skoudis, co-founder of security consultancy InGuardians Inc. and an instructor with the SANS Institute, a computer-security training organization. Skoudis investigates retail breaches in support of government investigations. "But they don't want to strangle the goose that laid the golden egg by making it too hard to accept credit cards, because that's bad for everybody."