Let me say one word: Internet. Now, how about a couple more: Broadband Access. It's my contention that the security problems we face today in the form of unsolicited e-mail, virus infection, phishing scams and the dreaded identity theft are the direct result of putting powerful, broadband-connected computers into the hands of users who are unqualified to own, use and operate such technology.
Think about it: how many of your security issues are caused by the millions of unprotected systems attached to the Internet that propagate viruses, attacks and security breaches (loss of credentials, backdoor access and other such problems)? How do you think unprotected wireless access points (WAPs) that give hackers untraceable access will affect the problem? What about Bluetooth-enabled personal digital assistants and mobile phones that store authentication credentials and private information, waiting for a trained eye to grab them? How do you think home-entertainment systems, TVs, refrigerators and coffee pots, all with Internet connections, will affect it?
Now let me say this: We're losing our grasp on technology, and we're quickly losing the ability to reel it back in.
Who's to Blame?
Let's see: after we admit we have a problem and identify it, the next step is figuring out whom to blame, right? The easy thing to do is to blame the large software corporations and human-interface designers who place ease-of-use requirements above every other criterion in systems design — especially security — but I think we all have to share in the responsibility. How many of us buy technology products with security in mind rather than ease of use? How many of us would prefer that our gadgets and gizmos simply do their job, instead of having to worry about how the bells and whistles really work? How many of us read the manual? It's hard to blame the manufacturers for building the products we ask for, based on the criteria we use to buy them — not that anyone reading this falls into that category, I'm sure.
After we list ourselves, then we can blame the large software corporations. The typical “make it easy” strategy is often implemented by simply removing the security controls and ignoring the fact that they exist. This, too, is a business decision: only the security pieces the user can see — the ones that make them feel secure — are deemed necessary. It's cheaper and more expeditious to simply bypass the security controls than to make them as easy to use as the rest of the product. In this respect, the software giants are sacrificing security as a direct result of their desire to improve the bottom line. The software manufacturers are not alone in this, either; the hardware manufacturers that supply hubs, switches, WAPs and routers to the commercial end-user market follow the same model. There's also the notion that meeting product deadlines and releasing new versions — to drive revenue streams — is more important than making sure the product is stable and complete. (You know, they can always release a patch once they have our money, and never mind that most consumers wouldn't know to look for a patch, let alone how to apply it.)
In a free-market system, this is the way it works: businesses build the products with the features we will pay for and do whatever they can to maximize profit. Security isn't, and hasn't been, a buying criterion, and the products that implement good security are often too complicated for the typical untrained end user, earning them a reputation in the marketplace as difficult to use. Security issues arising from this attitude and practice are classic externalities which, to a fair degree, have not been recognized or dealt with. Just like pollution, poor security design and controls in commercial products threaten our social and economic future. Give a Hoot: Don't give a newbie Root.
How Do We Fix It?
Some people will argue that the idea of the general masses using technology they don't understand isn't new. Automobiles are the most frequently cited example — most people have no idea how a car actually works, or have only the vaguest idea, and yet millions of people own and operate them successfully every day. The difference is that, to operate a motor vehicle safely, we have implemented educational and licensing requirements that, if they don't explain how the car works, at least explain how to operate it safely. This was the impetus for the “Internet driver's license” idea that has been floating about, and it represents one way to fix the problem: through statute. Another way is to change the manner in which we deliver our products and services, putting the spotlight on education, if not specifically on security, instead of just on ease of use. The last option, the de facto solution today, is to create more technology to fix the problems of the previous iterations.
Regulation is certainly an option but, in my opinion, an unattainable, unenforceable and useless one if handled the way it has been in the past. Something like the Internet driver's license would be impossible to enact or enforce simply because of the international scope of the Internet itself; enforcing it would be about as easy as controlling the Internet, and no one's figured out a good way to do that either. Other attempts at regulation in this vein have failed for precisely the same reason. The spam laws (sorry, Hormel Foods, pride of Austin, MN) enacted in the United States, European Union and other countries have done little more than push spammers — and their bandwidth demands — into countries without such laws. Without an all-encompassing and enforceable world law, regulation won't make much difference. That doesn't mean legislation — or governmental pushing, at least — can't be useful. Through organizations like the National Institute of Standards and Technology (NIST) and the Federal Information Processing Standards (FIPS), the U.S. government has been extremely effective in pushing industry-wide initiatives like the Advanced Encryption Standard (AES) and its predecessor, the Data Encryption Standard (DES), even in the far-flung reaches of the globe.
I've been pushing a “signed e-mail” idea for years. If you want to defeat unsolicited e-mail (the basis for many security issues), all you have to do is have an e-mail client that won't accept messages that are not digitally signed by the sender, plus a central certificate authority to handle the generation, distribution and management of the certificates. The U.S. Postal Service (USPS) is already poised to handle the certificate-authority part — I've seen the room (well, the door to it, anyway) that holds their certificate authority servers — and they've been looking for a way to become profitable again. Think of the possibilities if it were to happen. The simple idea is that if you want to send e-mail that won't be blocked by any U.S. mail client, it must carry a valid USPS signature — one registered to the person who sent the e-mail. If you send unsolicited e-mail that violates spam laws, you can be tracked down and held accountable (even have your USPS certificate revoked). Of course, there would still be the need to allow unsigned e-mail for international correspondence and for those of us who don't particularly trust our government to stay out of our private business — but the point is that the government could bring significant pressure to bear to create such e-mail clients and to make “only accept signed mail” the default setting. In this manner, the government could push the industry in the right direction — not toward new technology, but toward better use of the technology we already have.
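To make the client-side half of the idea concrete, here is a minimal sketch of the “reject unsigned mail by default” rule. It is purely illustrative: the sender registry, the `X-USPS-Signature` header name and the use of a shared HMAC key (standing in for real public-key certificates issued by a certificate authority) are my own assumptions, not part of any actual USPS system.

```python
import hashlib
import hmac
from email.message import EmailMessage

# Hypothetical registry: in the real proposal a USPS certificate authority
# would issue per-sender certificates; here a per-sender HMAC key stands in
# for true public-key signatures so the sketch stays self-contained.
REGISTERED_SENDERS = {"alice@example.com": b"key-issued-by-hypothetical-ca"}

def sign_message(msg: EmailMessage, key: bytes) -> None:
    """Attach a signature over the message body (illustrative header name)."""
    digest = hmac.new(key, msg.get_content().encode(), hashlib.sha256).hexdigest()
    msg["X-USPS-Signature"] = digest

def accept(msg: EmailMessage) -> bool:
    """Client policy: drop any mail that lacks a valid registered signature."""
    sender = msg["From"]
    sig = msg["X-USPS-Signature"]
    key = REGISTERED_SENDERS.get(sender)
    if key is None or sig is None:
        return False  # unsigned or unregistered sender: reject by default
    expected = hmac.new(key, msg.get_content().encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```

In a real deployment the HMAC lookup would be replaced by verifying an X.509 signature chaining back to the certificate authority's root, but the default-deny shape of `accept()` is the point: anything unsigned simply never reaches the inbox.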
You knew I wouldn't be able to ignore my first belief here, right? So, whatever happened to software documentation? I started in this business when I was 11 or 12 by reading the IBM DOS v3.2 manual cover to cover (I won't admit to knowing any OS earlier than that). I didn't need to spend hundreds of hours and thousands of dollars just to gain a tenuous grasp of an operating system; rather, it took just a few sleepless nights and a lot of mistakes. Granted, systems are infinitely more complex now, but how much of that complexity was created in the process of hiding the details? There's a lot to be said for written, hard-copy documentation — you know, the kind you can read in the car, on the plane or at your child's music recital. The current alternative in the most popular commercial operating system is wizards that take care of the details by hiding them behind a non-threatening, inflexible mask of simplicity. In fact, the creators of these products have gone so far with this approach that even seasoned professionals no longer have the ability to fix the automatic settings. Once, for almost a week, I couldn't use my Web browser because Windows insisted that I wasn't “connected to the Internet” and the “connection wizard” crashed every time it started — never mind that I could telnet to any Web site in the world from the command line. This obviously is not the right direction. Perhaps, instead of hiding the details or applying “default” settings, the wizards should show all the settings along with an explanation of what they do. How about a best-practices guide that explains the harsh realities of running an unprotected system and how to protect it? And, for the love of all that is sacrosanct, what about a real, written manual? Some people actually read them, if given the option.
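The command-line check in that anecdote — proving connectivity by opening a raw connection to a Web server's port, the way `telnet example.com 80` does — can be sketched in a few lines. The function name and defaults are my own invention; it's a crude probe for illustration, not a full diagnostic.

```python
import socket

def port_open(host: str, port: int = 80, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the
    timeout -- the same evidence of connectivity that manually running
    'telnet host 80' gives you, regardless of what a wizard claims."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A setup wizard that showed the result of a check like this, instead of crashing or hiding it, would let even a novice see exactly which layer had failed.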
Who Will Pay?
Obviously, the question of cost is twofold: what are the costs of not changing the track we're currently on (developing more technology to solve the problem), and who will pay for making the changes needed to correct it?
As for the costs of not changing, let's look at the 2004 CSI Computer Crime and Security Survey. Of the security incident types tracked, the two most costly are denial of service and viruses, at $26 million and $55 million in reported losses, respectively — and these are probably the two that would be most diminished if users were better educated about how to use their systems safely. And that's just the losses associated with the incidents themselves. Consider that spending on anti-virus software (not counting the costs of implementation) is expected to reach $4.4 billion by 2007, according to various Internet sources. I don't even want to get into the burgeoning spyware and malware market, or spending on IDS, firewalls and the rest. Who pays for not changing the path we're on? We all do, whether or not we know how to use our systems safely — most people don't, and they cost the rest of us money, because companies have to recover those expenses somehow, usually through increased prices for their products and services. Every iteration of technology deployed to solve the problems of technology simply increases the cost and effort required to maintain the status quo.
So, who should pay to change the way we are doing things? Again, unfortunately, the answer is probably all of us. In the case of changing things through governmental influence, we'll have to be ready to pay either through our tax dollars or through the increased prices of products built to satisfy the new requirements. Using my signed e-mail scenario as an example, we can't assume the Postal Service would host and maintain a national certificate authority for free, and I won't presume to know what doing it would cost. How much would people be willing to spend to eliminate unsolicited e-mail from their inboxes? Twenty-five dollars a year? Twenty-five dollars a month? I don't know the answer, but I'm willing to bet there is some price they would pay — and they would have to pay something, even if it were subsidized with tax dollars. In the case of changing software manufacturers' policies, we'll probably have to pay again. It costs money to write and print manuals. I'd say it costs more to warehouse and distribute them, too, except that they still give us that big box as if there were a manual inside, only for us to find nothing but a CD. It would also cost more time and money to develop products that educate users instead of simply hiding the details.
Oh, we will pay one way or another. Personally, I'd rather pay upfront in the hopes that we can stem the tide than perpetually pay ever-increasing amounts to find technological solutions. The more complex this gets, the greater the fall is going to be if it comes crashing down.
Summary
Somewhere, deep in the night, there is a VCR still blinking 12:00, but the owner has long since stopped trying to figure out how to set it. Why bother? The guy has a TiVo connected to his wireless network that allows him (and anyone driving by) to watch all the adult-oriented shows he would ever want to, not to mention every episode of “Friends,” at any time — even 12:00. Right next to that VCR is a personal computer that, unknown to its owner, is launching a denial-of-service attack on an international banking concern and bulk e-mailing the latest mail worm to everyone on his contact list, including his brother — who happens to work at your company.
Unless we do something to educate consumers and change the way we do things, we will keep spending billions of dollars a year on technical solutions to the problems untrained users create.