Cloud computing grows more popular by the day, and it continues to show its value to the healthcare industry. Being able to access content dynamically from anywhere online is a great asset. But, of course, this convenience comes with real risks to your data's security. Thankfully, there are some ways you can tip the odds in your favor.
To help you successfully leverage your technology to meet the needs of your organization without compromising your data’s security, we’ve assembled three common risks that are typically associated with Cloud solutions, and how to successfully avoid them.
Number 1: Data Theft
The most obvious risk to your organization’s data, and any information that’s stored online, is data theft, and other types of hacks that could compromise or even corrupt your mission-critical information. No matter how small or large your organization is, it’s a target for hackers and threats of all kinds, especially in the online environment.
It’s important that you understand that there’s no way to ensure that your practice’s data is 100 percent protected from all types of threats found on the Internet. It’s just not feasible. As long as your organization’s data is stored in an online environment, there’s always going to be a possibility (no matter how slim) that a hacker will get their hands on your data. What you can do, however, is optimize your network and Cloud security to keep this possibility as small as possible. To find out more information about online data security, contact CAM and ask us about our comprehensive security solutions for the online environment.
Number 2: Compliance Violation
Many organizations in specific industries are subject to compliance laws pertaining to the storage and sharing of sensitive information. Due to the nature of cloud storage, using it to store sensitive information in an online environment can have unexpected complications. For example, if this information were to be compromised, what would you do? Depending on the situation, you will be required to inform the victim of the breach, and/or be subject to a costly fine.
Naturally, it’s your responsibility to ensure that your systems are meeting the compliance standards set by your industry. Depending on what type of operation you run, there are specific criteria that must be met for any kind of sensitive information stored online. Chances are that if your organization collects this information, you’re subject to compliance laws that are often convoluted and difficult to understand. CAM HIPAA Solutions can help make this easier.
Number 3: Immense Downtime
If your practice only stores information in the Cloud, what would happen if that information were suddenly unavailable due to downtime? Hosting your data in the Cloud means you need an Internet connection; if that connection is lost, you’ll be staring downtime in the eye. This, in essence, is a major roadblock that can set your organization behind schedule, break your operations budget, and, overall, become quite a nuisance.
This is the reason why you want your information stored in multiple locations; you should be able to access your organization’s data and mission-critical applications from both online and offline systems. This minimizes downtime and improves mobility, which is invaluable for remote workers.
The Internet of Things has introduced security issues to hundreds of devices that previously were off-limits to hackers, turning innocuous appliances like refrigerators and toasters into gateways for data theft and spying. But most alarmingly, the Internet of Things has created a whole new set of security vulnerabilities with life-threatening risks. We’re talking about the cars and, particularly, medical devices that are now in the sights of hackers—including drug infusion pumps, pacemakers, and other critical hospital equipment.
Now a California medical doctor is teaming up with technologists and patients to develop a new technical standard to secure insulin pumps used by diabetics. The standard, expected to be completed by July, could become a model to help secure other medical equipment in the future—especially because, in an unconventional move, the doctor is collaborating with patients who tinker with their own medical devices.
Dr. David Klonoff, an endocrinologist and medical director of the Diabetes Research Institute at the Mills-Peninsula Health Services facility, became concerned for the safety of his patients after reading stories about security researchers like Jay Radcliffe, who found vulnerabilities in his own insulin pump in 2012. The vulnerabilities would allow a hacker to manipulate the dosage and deliver too much insulin, causing a patient’s blood sugar to plummet and potentially sending him into a diabetic coma, or even killing him. “Right now there is no [security] standard for any medical device,” Klonoff notes. “As health-care professionals, we all want to see our patients have safe equipment and not be at risk.”
Creating a security standard for insulin pumps, however, comes with a caveat: it has to consider the needs of a special group of do-it-yourself patients and technologists who use an existing vulnerability in current insulin pumps to hack their devices and produce better, personalized results.
The diabetes community has a heightened interest in their medical equipment that exceeds that of other patient communities. Klonoff says his committee wants to embrace that rather than discount it. “We have to keep in mind the tradeoff between wanting security and maintaining usability … and make it possible that a do-it-yourselfer can still do some things with their device,” he says. “If we make the standard too tight … a lot of patients will complain, ‘Now I can’t use my device.’ There is always going to be this tradeoff.”
Klonoff doesn’t have any technical training, so he’s an unusual choice to lead the drive for a technology security standard. But he created a previous technical standard for the FDA, for the performance of continuous glucose monitors, so when he approached the federal agency earlier this year about the need for security in insulin pumps, they asked him to assemble a committee of experts.
Klonoff’s committee has nearly four dozen members, including representatives from the National Institute of Standards and Technology, the Department of Homeland Security, and the FDA, as well as companies and individuals with expertise in diabetes systems or in IT. Some do-it-yourself diabetic patients have also consulted with Klonoff about their wish list for the standard.
The backgrounds of the committee members make them much more invested in the effort and bring a “double, extra-level of understanding and perspective” to the problem, says Suzanne Schwartz, director of Emergency Preparedness/Operations & Medical Countermeasures at the FDA’s Center for Devices and Radiological Health. The FDA initially considered launching a similar project simultaneously for other medical devices, but ultimately concluded they should get it right with one device first.
The insulin pump technology most patients currently use is a manual system that requires the patient to determine when he or she needs a dose of insulin and how much. A continuous glucose monitor uses a sensor implanted beneath the patient’s skin to take a glucose reading of fluids and send it wirelessly to a pager-like device, an iPhone, or to the cloud, where a physician or parent can also read it. The patient or caregiver uses this and other data to help determine how much insulin to administer and instructs the pump to deliver it via a tiny catheter implanted beneath the skin. The downside to this system is that it requires constant vigilance and quick response. Food can affect blood glucose levels for six to 12 hours after consumption, requiring frequent readings. This can cause patients to miss readings or ignore data that calls for frequent adjustments.
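The dose arithmetic the paragraph above describes—a patient weighing a glucose reading and a meal against their prescribed ratios—can be sketched as a simple calculation. The ratios and targets below are hypothetical illustration values, not clinical guidance; real parameters are prescribed per patient.

```python
def bolus_estimate(carbs_g, current_bg, target_bg=110,
                   carb_ratio=10, correction_factor=50):
    """Estimate an insulin bolus in units.

    carb_ratio: grams of carbohydrate covered by 1 unit of insulin.
    correction_factor: mg/dL drop in blood glucose per 1 unit of insulin.
    These defaults are invented for illustration only.
    """
    meal_dose = carbs_g / carb_ratio
    # Only correct when glucose is above target; never a negative dose.
    correction_dose = max(0.0, (current_bg - target_bg) / correction_factor)
    return round(meal_dose + correction_dose, 1)

# A 60 g-carb meal with a reading of 180 mg/dL:
print(bolus_estimate(60, 180))  # 7.4
```

This is exactly the kind of repetitive, error-prone mental math that makes constant vigilance so burdensome, and that the automated systems described next aim to take over.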
A new technology in the late stages of development would automate this process. But fully functional products won’t be on the market for more than a year, as they wend their way through the FDA approval process. The new system, known as an artificial pancreas, uses a continuous glucose monitor, insulin pump, and smart algorithms to measure a patient’s glucose levels and automatically deliver insulin based on the algorithms’ calculations. This closed-loop system would make slight adjustments to increase or decrease insulin as needed, making it particularly useful at night when patients are sleeping and can’t make manual adjustments.
Both pump systems, the manual ones and new automated ones, have wireless capability. But they currently don’t encrypt the communication that passes from the glucose monitor to the handheld device or encrypt the commands that go to the pump. They also don’t authenticate that data to ensure that only an authorized device or person can send it commands. Anyone in the vicinity of a patient can intercept glucose readings and alter them or inject their own commands into the data going to the pump. “If the information is corrupted, that would be bad—or even if it’s not available, that would lead to an incorrect decision,” Klonoff says.
The only thing that’s needed to pull data from an insulin pump or send a dose to a patient is the pump’s six-digit serial number, which operates like an address or phone number to identify the device. But this number is printed on the outside of each pump and also gets transmitted in the clear with any communication the device sends, making it easily accessible to hackers who are sniffing the wireless traffic.
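The gap described above—commands accepted on the basis of a publicly printed serial number, with no authentication and no replay protection—is the kind of thing the standard could close with a shared secret, a message authentication code, and a monotonic counter. The wire format below is invented for illustration; it is one plausible sketch, not the committee's design.

```python
import hmac
import hashlib
import struct

def sign_command(key: bytes, serial: str, counter: int, command: bytes) -> bytes:
    """Build a packet with an HMAC-SHA256 tag over (serial, counter, command).

    The pump and controller share `key`; `counter` must increase with
    every command so captured packets cannot be replayed.
    """
    msg = serial.encode() + struct.pack(">Q", counter) + command
    tag = hmac.new(key, msg, hashlib.sha256).digest()
    return msg + tag

def verify_command(key: bytes, packet: bytes, last_counter: int):
    """Return (counter, command) if the tag is valid and fresh, else None."""
    msg, tag = packet[:-32], packet[-32:]
    expected = hmac.new(key, msg, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        return None                     # forged or corrupted packet
    counter = struct.unpack(">Q", msg[6:14])[0]
    if counter <= last_counter:
        return None                     # replayed packet
    return counter, msg[14:]
```

Under a scheme like this, knowing the six-digit serial number alone buys an attacker nothing; without the key, neither forged commands nor replayed ones verify.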
The security standard will not only require vendors to build assurance into their devices so that data is authenticated and not corrupted; they’ll have to prove assurance through testing. The committee intends to create a protocol to certify labs capable of testing devices against the standard. “We’ll have a certain small number of labs that will demonstrate to our committee that they understand [penetration testing] and are qualified to look at a product and see whether it does what it’s supposed to do,” Klonoff says.
Although security standards can help secure new medical devices coming on the market, they don’t address current devices and equipment that won’t get replaced. The FDA’s Schwartz says the agency hasn’t ruled out the possibility of establishing a vulnerability assessment program for medical devices, which would have a government lab examine and test them for security vulnerabilities and work with makers of the devices to get them patched in a timely manner or find ways to mitigate the risk of someone attacking them. The current process for fixing vulnerabilities in medical devices is not very organized and can take a year or longer to get a vendor to even acknowledge an issue, let alone get it fixed.
In the meantime, Schwartz says the FDA plans to publish a draft guidance “that speaks to what our expectations are of the industry with regard to the post-market management of medical device security. A lot of this is about educating manufacturers [and] shifting attitudes that the environment is not the same environment today as it was five or ten years ago.”
Now is the time for a standard, before more wireless insulin pumps come on the market. “It’s very difficult for the FDA to take a product off the market once it’s already there,” Klonoff says. With a standard in place, he expects that market demand will drive vendors to replace existing products with more secure ones, in part because the FDA and insurance companies will be able to insist that products meet the standard for security.
There are challenges to creating a security standard for insulin pumps, however. Adding fingerprint biometrics or passwords to devices to authenticate access might lock a patient out of his own device if his finger is sweaty or he is unable to remember a passcode in the throes of a medical emergency. There are also concerns about giving paramedics and other caregivers the access they need to read data quickly from a pump or alter its dosage for a patient who is delirious or unconscious.
And there’s the issue of the DIYers. Klonoff says the committee wants to find a way to secure insulin pumps to shut out nefarious hackers while still letting patients hack their own pumps for better performance.
Some diabetic systems currently on the market have a vulnerability—a debugging feature left in the firmware by the vendor—that patients have been exploiting to create their own closed-loop system. Their home-brewed system uses complex algorithms to assess readings from their glucose monitors, automatically determine proper insulin doses, and instruct their pumps to deliver it. The algorithms can even anticipate insulin needs based on planned activities and lifestyle.
Ben West is a computer engineer and the primary architect of the hacked system. He spent years studying the software of his own pump to figure out how he might pull automatic readings from his glucose monitor, compute doses from them, and transmit commands to his pump, a process he chronicled in a GitHub post. In the course of his research, he decompiled core code used in pump systems and posted it online, which allowed Bryan Mazlish, a father and husband to two diabetics, to design a closed-loop system and launch a company, Bigfoot BioMedical, around it. That commercial system won’t be on the market for a while, however, so West and a couple in Seattle created a toolkit called OpenAPS, which weaves together different data sets from various diabetes monitoring and pump components so they can communicate. It takes some finessing for a user to assemble, but it works with multiple glucose monitoring systems.
“We’re providing the building blocks,” he says. “All of those [devices] look and feel very different, so I’ve concentrated on making those look and feel the exact same under OpenAPS. That allows people to put their loop together themselves and be customized for exactly what they want to do.”
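Making disparate devices "look and feel the exact same" is, in software terms, an adapter layer: each device's raw output gets wrapped behind one common interface. The class and field names below are hypothetical, invented to illustrate the idea, not the actual OpenAPS code.

```python
class GlucoseSource:
    """Uniform interface a toolkit like OpenAPS could present
    (hypothetical sketch; the real toolchain differs)."""
    def latest_mgdl(self) -> int:
        raise NotImplementedError

class MonitorAAdapter(GlucoseSource):
    """Wraps one vendor's raw record, which reports readings as 'sgv'."""
    def __init__(self, raw):
        self.raw = raw
    def latest_mgdl(self):
        return int(self.raw["sgv"])

class MonitorBAdapter(GlucoseSource):
    """Wraps another vendor's record, which uses a 'glucose' string field."""
    def __init__(self, raw):
        self.raw = raw
    def latest_mgdl(self):
        return int(self.raw["glucose"])
```

Downstream loop code only ever calls `latest_mgdl()`, so a patient can swap monitors or pumps and customize the loop without touching the algorithm itself.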
The hack has made a huge difference in the quality of life for patients like Chris Hannemann, a 31-year-old mechanical engineer in Berkeley, California, who was diagnosed with Type 1 diabetes at the age of eight. Hannemann’s sister also has Type 1 diabetes and his father has Type 2.
Using the tools West developed, Hannemann hacked his Medtronic MiniMed Paradigm 723 insulin system so that it will automatically adjust to his body’s insulin needs using data from his continuous glucose monitor. “[W]henever you eat or want to do a correction if your blood sugar is too high, you can tell the pump to [automatically] give a larger dose instantaneously or over time,” he says. “That’s something you wouldn’t be able to find in any [current] commercial system…. I can pull data that I wouldn’t otherwise be able to get from my device and slightly tweak things that work and don’t work until I get a piece of equipment that’s best tailored to my own treatment….I’ve seen decidedly better outcomes in my own health as a result of using this.”
Although automated systems will be on the market eventually, Hannemann and others aren’t willing to wait. “This is our way of short-circuiting that and taking control with devices that are on the market now,” he says.
Hannemann says a security standard for pumps is “definitely overdue.” He and West connected with Klonoff about two months ago to offer their input. “As patients we have a unique perspective—we’re patients but we’re probably edge-case patients as well,” he says.
He says the challenge for the standard is not equating security with “closed off.”
“What you really like to have is a system where all the transmissions are secure—you want there to be a [digital] handshake between whatever device is talking to the insulin pump and the insulin pump itself [to authenticate themselves to each other]—and [you want] different authentication levels as well, so a third-party device could read from the insulin pump but not send commands to the pump,” he says. “I want it so that only my device can talk to the pump and it’s encrypted communication. I don’t want someone else to be able to walk up and… just be able to communicate with my pump.”
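Hannemann's tiered model—any paired device may read, but only a fully trusted controller may command—could be expressed as a small authorization check. Device names and levels here are invented for illustration.

```python
from enum import Enum

class Access(Enum):
    READ = 1      # e.g., a third-party display app
    COMMAND = 2   # the patient's own paired controller

# Hypothetical pairing table: device ID -> granted access level.
PAIRED = {
    "display-app-01": Access.READ,
    "my-controller": Access.COMMAND,
}

def authorize(device_id: str, action: str) -> bool:
    """Allow reads for any paired device; commands only at COMMAND level."""
    level = PAIRED.get(device_id)
    if level is None:
        return False                  # unpaired devices get nothing
    if action == "read":
        return True
    return level is Access.COMMAND    # dosing requires full trust
```

Layered on top of encrypted, mutually authenticated sessions, a check like this keeps the “walk up and talk to my pump” scenario closed while still letting caregivers and apps see the data they need.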
Klonoff agrees and says that any standard they develop should take the DIY movement of West, Hannemann, and others into consideration, since their tinkering has already made major contributions to the innovation of automated insulin pumps and will likely lead to more innovations that benefit patients in the future.