



How Health Care Changes When Algorithms Start Making Diagnoses

MAY 08, 2018

Imagine that the next time you see your doctor, she says you have a life-threatening disease. The catch? A computer has performed your diagnosis, which is too complex for humans to understand entirely. What your doctor can explain, however, is that the computer is almost always right.

If this sounds like science fiction, it’s not. It’s what health care might seem like to doctors, patients, and regulators around the world as new methods in machine learning offer more insights from ever-growing amounts of data.

Complex algorithms will soon help clinicians make incredibly accurate determinations about our health from large amounts of information, premised on largely unexplainable correlations in that data.

This future is alarming, no doubt, due to the power that doctors and patients will start handing off to machines. But it’s also a future that we must prepare for — and embrace — because of the impact these new methods will have and the lives we can potentially save.

Take, for example, a study released today by a group of researchers from the University of Chicago, Stanford University, the University of California, San Francisco, and Google. The study, which one of us coauthored, fed de-identified data on hundreds of thousands of patients into a series of machine learning algorithms powered by Google’s massive computing resources.

With extraordinary accuracy, these algorithms were able to predict and diagnose diseases, from cardiovascular illnesses to cancer, and predict related things such as the likelihood of death, the length of hospital stay, and the chance of hospital readmission. Within 24 hours of a patient’s hospitalization, for example, the algorithms were able to predict with over 90% accuracy the patient’s odds of dying. These predictions, however, were based on patterns in the data that the researchers could not fully explain.

And this study is no outlier. Last year the same team at Google used data on eye scans from over 125,000 patients to build an algorithm that could detect retinopathy, the number one cause of blindness in some parts of the world, with over 90% accuracy, on par with board-certified ophthalmologists. Again, these results had the same constraints; humans could not always fully comprehend why the models made the decisions they made. Many more such examples are on the way.

Already, however, some are resisting these methods, calling for a complete ban on using “non-explainable algorithms” in high-impact areas such as health. Earlier this year, France’s minister of state for the digital sector flatly stated that any algorithm that cannot be explained should not be used.

But opposing these advances wholesale is not the answer. The benefits of an algorithmic approach to medicine are simply too great to ignore. Earlier detection of ailments like skin cancer or cardiovascular disease could lead to reductions in morbidity thanks to these methods. Poorer economies with limited access to trained physicians may benefit as well, as a host of diseases may be found and treated earlier. Individualized treatment recommendations may also improve, leading to saved lives for some and increased quality of life for many others.

This is not to suggest that machine learning models will replace physicians. Instead, what’s likely is a steady shift to ceding responsibility for more of the repetitive and programmable tasks to machines, allowing physicians to focus on issues more directly related to patient care. In some cases, doctors may have a legal obligation to use models that are more accurate than human expertise, as legal scholars such as A. Michael Froomkin have noted. This won’t take doctors out of the loop entirely, but it will create new opportunities and new dangers as the technology evolves and becomes more powerful.

How should we ready ourselves for a future in which the burden of diagnosis rests more and more on algorithms?

First, medical providers, research institutions, and governments must devote more resources to the field of “explainable AI,” whose goal is to help humans better understand how to interact with complex, seemingly indecipherable algorithmic decisions. The Defense Advanced Research Projects Agency (DARPA), for example, has dedicated an entire project to the issue, and a growing research community focused on the problem has sprung up in recent years. Such research will be crucial to our ability to put these algorithms to use and to trust them when we do.

Health care regulators must also explore new ways to govern the use of these methods. The U.S. Food and Drug Administration’s pilot “Pre-cert” program, which is directed at finding new ways to evaluate technologies like machine learning, is one such example. Regulators should also draw from existing methods in the financial sector, known as model risk management frameworks, which were developed in response to similar challenges. As banks adopted complex machine learning methods over the last decade, regulators in the United States and European Union implemented these frameworks to maintain oversight.

Governments must ensure that the massive amounts of data these new methods require don’t become the province of only a few companies, as has occurred in the data-intensive worlds of online advertising and credit scoring. Regulators at the U.S. Department of Health and Human Services who enforce federal privacy rules on medical data, along with federal and state-level legislators, should encourage the sharing of medical data, with proper oversight.

Lastly, patients should be able to know when and why their doctors are relying on algorithms to make predictions. When appropriate, patients should retain the ability to request more traditional — and understandable — medical explanations. If an algorithm gives a patient a 90% chance of dying within the next week, for example, the patient should be able to learn more about the ways the algorithm was created, assessed for accuracy, and validated. And they should be able to view the diagnosis alongside a more traditional determination, even if the latter is less likely to be accurate.

Challenges to using machine learning in health care abound. But these challenges pale in comparison with the benefits these advances will bring. Lives could depend on it.

Andrew Burt is chief privacy officer and legal engineer at Immuta.

Samuel Volchenboum, MD, is an associate professor of pediatrics, director of the Center for Research Informatics, and a member of the Center for Healthcare Delivery Sciences and Innovation at the University of Chicago.



Robot teaches itself how to dress people

Instead of vision, machine relies on force as it pulls a gown onto human arms

May 14, 2018
Georgia Institute of Technology
A robot is successfully sliding hospital gowns on people’s arms. The machine doesn’t use its eyes as it pulls the cloth. Instead, it relies on the forces it feels as it guides the garment onto a person’s hand, around the elbow and onto the shoulder.

A PR2 robot puts a gown on Henry Clever, a member of the research team. Credit: Georgia Tech

More than 1 million Americans require daily physical assistance to get dressed because of injury, disease and advanced age. Robots could potentially help, but cloth and the human body are complex.

To help address this need, a robot at the Georgia Institute of Technology is doing just that: sliding hospital gowns onto people’s arms using its sense of touch rather than vision.

The machine, a PR2, taught itself in one day, by analyzing nearly 11,000 simulated examples of a robot putting a gown onto a human arm. Some of those attempts were flawless. Others were spectacular failures — the simulated robot applied dangerous forces to the arm when the cloth would catch on the person’s hand or elbow.

From these examples, the PR2’s neural network learned to estimate the forces applied to the human. In a sense, the simulations allowed the robot to learn what it feels like to be the human receiving assistance.
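The simulation-driven learning described above can be illustrated with a toy sketch. The code below is hypothetical and much simpler than the real system (which trained a neural network on physics simulations); it fits a plain linear model to synthetic (state, force) pairs, standing in for the robot learning to predict the force a motion will apply before executing it.

```python
import numpy as np

# Toy stand-in for learning from ~11,000 simulated dressing attempts.
# All names and data here are invented for illustration.
rng = np.random.default_rng(0)

# Synthetic training set: 4 "state" features (e.g., gown position and
# velocity relative to the arm) -> 1 force value felt by the person.
true_w = np.array([0.5, -1.2, 2.0, 0.3])
X = rng.normal(size=(11000, 4))                       # simulated states
y = X @ true_w + rng.normal(scale=0.05, size=11000)   # measured forces

# Fit by gradient descent -- the "learning from simulation" step.
w = np.zeros(4)
lr = 0.1
for _ in range(200):
    grad = 2 * X.T @ (X @ w - y) / len(y)
    w -= lr * grad

# The fitted model can now predict the force a candidate motion would
# cause, letting the robot evaluate motions before trying them.
predicted = X[:5] @ w
```

The design point is the same as in the study: trial and error happens cheaply and safely in simulation, and only the learned predictor touches a real person.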

“People learn new skills using trial and error. We gave the PR2 the same opportunity,” said Zackory Erickson, the lead Georgia Tech Ph.D. student on the research team. “Doing thousands of trials on a human would have been dangerous, let alone impossibly tedious. But in just one day, using simulations, the robot learned what a person may physically feel while getting dressed.”

The robot also learned to predict the consequences of moving the gown in different ways. Some motions made the gown taut, pulling hard against the person’s body. Other movements slid the gown smoothly along the person’s arm. The robot uses these predictions to select motions that comfortably dress the arm.

After success in simulation, the PR2 attempted to dress people. Participants sat in front of the robot and watched as it held a gown and slid it onto their arms. Rather than vision, the robot used its sense of touch to perform the task based on what it learned about forces during the simulations.

“The key is that the robot is always thinking ahead,” said Charlie Kemp, an associate professor in the Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech and Emory University and the lead faculty member. “It asks itself, ‘if I pull the gown this way, will it cause more or less force on the person’s arm? What would happen if I go that way instead?'”

The researchers varied the robot’s timing and allowed it to think as much as a fifth of a second into the future while strategizing about its next move. Less than that caused the robot to fail more often.

“The more robots can understand about us, the more they’ll be able to help us,” Kemp said. “By predicting the physical implications of their actions, robots can provide assistance that is safer, more comfortable and more effective.”

The robot currently puts the gown on one arm, a process that takes about 10 seconds. The team says fully dressing a person is still many steps beyond this work.

Story Source:

Materials provided by Georgia Institute of Technology. Note: Content may be edited for style and length.

Ultrasonic Waves Are Everywhere. Can You Hear Them?

There are horrible sounds all around us only a small group of people can hear. They almost always come from machines — sometimes intentionally, and sometimes by accident. They’re loud enough to be annoying and cause headaches in people sensitive to them, though it seems they aren’t usually loud enough to cause permanent health issues. And scientists have no firm idea of how common these sounds are or how much damage, if any, they’re doing to society.

That’s the upshot of more than a decade of research by Timothy Leighton, a professor of acoustics at the University of Southampton in England, into a class of sounds called “ultrasonics” or “ultrasound.” He spoke about his work at the 175th meeting of the Acoustical Society of America (ASA) yesterday (May 9).

Ultrasonics are not well-defined, Leighton said in an interview with Live Science before his talk. In theory, he said, they’re sounds that are too high-pitched for people to hear. But in practice, they’re sounds that are right on the edge of hearing for infants, young people, some adult women and other groups with particularly acute hearing. And for those people, ultrasonics represent a growing problem that is not well studied or well understood, Leighton said. [Infographic: The Loudest Animals]


“A number of people were coming to me, and they were saying, ‘I feel ill in certain buildings,'” Leighton told Live Science. “No one else can hear it, and I’ve been to my doctor, and I’ve been to have my hearing checked. And everybody says it’s in my mind; I’m making it up.”

Part of the problem, according to Leighton, is that very few researchers are studying this issue.

“I think you’d be lucky to find even six people around the world working on this,” Leighton said. “And that’s, I think, why many sufferers ended up at my door.”

That isn’t to say that Leighton’s work is outside the scientific mainstream; he was one of two co-chairs of an invited session on high-frequency sound at the ASA meeting and has received The Royal Society’s Clifford Paterson Medal for separate research into underwater acoustics. But most acoustical researchers just aren’t studying high-frequency sound in human spaces; when Live Science reached out to a number of acoustics experts outside Leighton’s immediate circle of colleagues for comment on this article, the vast majority said they didn’t have the knowledge to comment.

Leighton started his early work on ultrasonic waves by going to buildings where people reported having symptoms. While he couldn’t hear the sounds, he recorded them using his microphones and consistently found ultrasonic frequencies.

“These are places where you might have a footfall of 3 [million] or 4 million people a year,” he said. “So it dawned on me that we were putting ultrasound into public places where a minority but a large number of people are going to be affected.”

And the effects aren’t trivial.

“If you’re in the zone [of an ultrasonic sound] and you’re one of the sensitive people, you’ll get headaches, nausea, tinnitus [ringing in the ears] and [various other symptoms],” Leighton said. “And once exposure stops, you recover. After about an hour, you get better.”

The illness in response to ultrasonic exposure might sound spooky to the point of superstition or quack theory, and researchers don’t understand quite why it happens. But it’s backed up by decades’ worth of consistent experiments by a number of different researchers.

Still, Leighton is one of a handful of experts on the subject, and he has no idea how many people are impacted by ultrasonics or how severe the effects are on a population scale.

The most famous supposedly ultrasonic event occurred when American diplomats in Cuba suffered a strange constellation of symptoms that officials initially attributed to some sort of ultrasonic weapon. And although the claim hasn’t held up under scrutiny, that was perhaps not entirely nutty; the most severe symptoms of ultrasonic-wave exposure do include headaches, tinnitus and hearing loss similar to what the U.S. diplomats encountered in Cuba. (Leighton, like most scientists, is skeptical that ultrasonic weapons were actually involved in that event.)

In reality, Leighton said, the reason ultrasonics are a problem is not that in bizarre, extreme cases they might expose a tiny fraction of the population to brain or permanent hearing damage. Rather, ultrasonics are likely exposing a large, young, vulnerable fraction of the population to discomfort, annoyance and the stigma of hearing things others can’t. And all that could easily be avoided.

Back in the late 1960s and early ’70s, researchers for the first time systematically examined what sort of sounds could cause problems in the workplace but were high-pitched enough that they didn’t become problematic in limited, low-volume doses. Based on those studies, governments around the world arrived at a common guideline for ultrasonics in the workplace: 20 kilohertz at medium volumes, or 20,000 vibrations per second.

That’s a very high-pitched sound, much higher than most adults can hear. In the video below, a tone slowly rises from a superlow 20-hertz tone to a 1,000-times-higher 20 kilohertz. I’m a 26-year-old man, and I can’t hear anything once the tone rises past about 16 kilohertz. (But I can’t say for certain that this isn’t the result of my headphones maxing out, rather than my hearing.)
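A sweep like the one in the video can be generated programmatically. Below is a minimal sketch, assuming NumPy and a 44.1 kHz sample rate, of an exponential sine sweep whose pitch rises from 20 Hz to 20 kHz; the function name and defaults are illustrative, not taken from any particular tool.

```python
import numpy as np

def log_sweep(f0=20.0, f1=20000.0, duration=10.0, sr=44100):
    """Generate a logarithmic (exponential) sine sweep from f0 to f1 Hz.

    The instantaneous frequency rises as f0 * (f1/f0)**(t/duration),
    so each octave takes the same amount of time -- matching how we
    perceive pitch.
    """
    t = np.linspace(0, duration, int(sr * duration), endpoint=False)
    k = np.log(f1 / f0)
    # Phase is the integral of 2*pi*f(t):
    phase = 2 * np.pi * f0 * duration / k * (np.exp(k * t / duration) - 1)
    return t, np.sin(phase)
```

Writing the result to a WAV file and playing it back gives a rough self-test of where your own hearing cuts out, subject to the caveat above about headphone response.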

But it’s not too high for all humans to hear. Just about everyone loses some hearing at the high end of the spectrum as they age. (Anyone who was in high school in the late 2000s will likely remember the annoying “mosquito” ringtone that teenagers could hear but teachers generally could not.) And men tend to lose their hearing in those ranges before women do, according to most research into hearing loss.

The problem with those 1970s studies, Leighton said, is that they were conducted mostly on adult men, many of whom worked in loud jobs and likely had fairly weak hearing. But governments all over the world based ultrasonics-related regulations on those studies, Leighton said. And those regulations, intended for loud workplaces, have come to dominate public spaces in developed countries, where people susceptible to ultrasonic waves might find themselves unwittingly exposed.

“If you’ve got such sounds being generated in the classroom, the teacher might not hear anything and think the children are misbehaving,” Leighton said. “But the children might hear a high-pitched whine and so be disturbed by that.”

“Or,” he added, “a grandmother with a baby in her arms can walk into​ a public place where there’s a lot of ultrasonic exposure, and the baby will be perturbed, and the grandmother will have absolutely no idea anything’s going on.”

There just aren’t that many researchers looking into ambient ultrasonics, Leighton said, so data on just where ultrasonics turn up is limited. So far, he said, his crowdsourced experiments have just managed to map ultrasonics in central London, but they’ve already provided some clues as to where ultrasonics might be found.

Sites ranging from railway stations to sports stadiums to restaurants were broadcasting ultrasonics, apparently without realizing it, over public address systems, via certain door sensors, or through devices meant to deter rodents, Leighton said.

There’s no single culprit for ultrasonic waves, Leighton said. A number of machines make them totally unintentionally. Some loudspeakers play them during test cycles. And Leighton said he’s found manufacturers of those sorts of devices that are interested in his research and fixing their ultrasonic problems. Other industries, though, like the makers of devices designed to keep away pests from yards and basements, are more resistant.

The next step for people who are worried about ultrasonics, Leighton said, is to collect a lot more data.

Right now, it’s difficult to research ultrasonics for the simple reason that most people can’t hear them, so most people don’t realize it’s an issue worth studying. And it’s difficult to do research into whether they present any specific dangers, Leighton said.

“We really can’t [test common ultrasonic machines] on young people and hurt them. I mean, it’s just not ethical,” he said. “And it’s alarming because you could go out to a hardware shop, and for $50, you could buy a pest scarer that will expose your neighbor’s child to far higher levels. And I’m never allowed to expose somebody to that in a lab and test them. That’s an irony.”

But, Leighton said, interest is growing.

Leighton recently put out a call for papers on ultrasonics and received about 30 manuscripts, about 20 of which were worth publishing. It seems likely, he suggested, that researchers will understand the waves and their effects on populations far better in the coming years than they do right now.

Originally published on Live Science.

What is box breathing?

Box breathing is a powerful, yet simple, relaxation technique that aims to return breathing to its normal rhythm. This breathing exercise may help to clear the mind, relax the body, and improve focus.

The technique is also known as “resetting your breath” or four-square breathing. It is easy to do, quick to learn, and can be a highly effective technique for people in stressful situations.

People with high-stress jobs, such as soldiers and police officers, often use box breathing when their bodies are in fight-or-flight mode. This technique is also relevant for anyone interested in re-centering themselves or improving their concentration.

Read on to discover the four simple steps required to master box breathing, and to learn more about other deep breathing techniques.

The box breathing method

Box breathing is a simple technique that a person can do anywhere, including at a work desk or in a cafe. Before starting, people should sit with their back supported in a comfortable chair and their feet on the floor.

  1. Close your eyes. Breathe in through your nose while counting to four slowly. Feel the air enter your lungs.
  2. Hold your breath inside while counting slowly to four. Try not to clamp your mouth or nose shut. Simply avoid inhaling or exhaling for 4 seconds.
  3. Begin to slowly exhale for 4 seconds.
  4. Repeat steps 1 to 3 at least three times. Ideally, repeat the three steps for 4 minutes, or until calm returns.
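The steps above map naturally onto a timed loop. The sketch below is a hypothetical pacing script, not from any app: it follows the article’s three phases (inhale, hold, exhale), and the `tick` parameter is injectable so the pacing logic can be checked without actually waiting.

```python
import time

def box_breathing(count=4, cycles=3, tick=time.sleep):
    """Guide `cycles` rounds of box breathing.

    Each phase -- inhale, hold, exhale -- lasts `count` seconds,
    mirroring steps 1-3 above. Returns the sequence of prompts so the
    caller can display or log them.
    """
    phases = ["Inhale", "Hold", "Exhale"]
    prompts = []
    for cycle in range(1, cycles + 1):
        for phase in phases:
            prompts.append(f"Cycle {cycle}: {phase} for {count} seconds")
            tick(count)  # wait out the phase
    return prompts
```

Lowering `count` to 3, or raising it to 5 or 6, matches the article’s advice for beginners and experienced practitioners respectively.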

If someone finds the technique challenging to begin with, they can try counting to three instead of four. Once someone is used to the technique, they may choose to count to five or six.

Mark Divine, a former Navy SEAL commander, recommends “maintaining an open, expansive feeling” while holding the breath in. In the video below, he describes how to use box breathing.

Why breath is vital to health

Resetting one’s breath, or working to make the breath leave fight-or-flight mode, is good for both the mind and body.

The unconscious body, or the autonomic nervous system, refers to the functions that take place without any thought, such as the heart beating or the stomach digesting food. This system can be in a fight-or-flight or rest-and-digest state.

In fight-or-flight mode, the body feels threatened and reacts to help the person escape or avoid a threatening situation. Among other things, the body releases hormones that make the heart beat faster, quicken breathing, and boost blood sugar levels.

Having this state of stress activated too often, or for too long, has adverse consequences on health, however. The physical impact of this state can cause wear and tear on every system in the body.

Long-term stress can increase the risk of a range of health conditions.

The ability to consciously regulate breath allows the body to leave a state of stress and enter into a state of calm.



Box breathing could provide a number of benefits to those who use it. Below are four potential benefits of box breathing, with research to support the claims.

Reduces physical stress symptoms in the body

Deep breathing techniques have been shown to significantly reduce the production of hormones associated with stress, such as cortisol.

In one study, participants showed lower levels of cortisol after deep breathing, as well as increased attention levels.

Positively affects emotions and mental well-being

According to some studies, the use of breathing techniques can be useful in the reduction of anxiety, depression, and stress.

Increases mental clarity, energy, and focus

One study was able to show that breathing techniques could bring about better focus and a more positive outlook.

Participants in the study were also more able to manage impulses, such as those associated with smoking and other addictive behaviors.

Improves future reactions to stress

Studies suggest that box breathing may have the ability to change someone’s future reactions to stress. Researchers have even suggested that “relaxation response” practices, such as meditation, deep breathing, and yoga, can alter how the body reacts to stress by changing how certain genes are switched on.

Genes have different roles within the body. The study found that relaxation response practices boosted the activation of genes associated with energy and insulin, and reduced the activation of genes linked to inflammation and stress.

According to the study, this effect occurs in both short-term and long-term practitioners of these techniques. However, the effect is more significant in long-term users.

Tips for box breathing

There are a number of steps that people can take to make box breathing easier:

  • Try to find a quiet space to begin with box breathing. A person can do this anywhere, but it is easier if there are few distractions.
  • With all deep breathing techniques, placing one hand on the chest and another on the lower stomach can help. When breathing in, try to feel the air and see where it is entering.
  • Focus on feeling an expansion in the stomach, but without forcing the muscles to push out.
  • Try to relax the muscles instead of engaging them.

Other deep breathing techniques


Many breathing techniques are classed as diaphragmatic breathing or deep breathing. Box breathing is one of the easiest to master, and is a great entry point into breathing methods.

Other breathing methods commonly used to increase alertness, calm the nerves, and achieve a sense of calm include:

  • Pranayama breathing
  • alternate nostril breathing
  • meditation breathing
  • Shaolin Dan Tian breathing

While many people use deep breathing techniques independently, there are also many apps available that are helpful for those people who are just learning how to do guided meditation and breath work.


With only four steps, mastering box breathing is possible for anyone looking to add more consciousness and relaxation to their daily routine.

Box breathing is one of many breathing techniques that can be useful in the reduction of day-to-day stress. Studies have shown the immediate and long-term benefits that this technique and others can provide.

Although more research is needed, current studies are convincing in their evidence for box breathing as a powerful tool in managing stress, regaining focus, and encouraging positive emotions and state of mind.


Are you a ‘night owl’? Regularly staying up late could be deadly, study finds



Here’s one big reason why being a morning person matters: Your risk of death may be lower.

A joint study by Northwestern University and the University of Surrey in the United Kingdom found “night owls” — people who prefer to stay up later — had a higher mortality rate than people who go to sleep early.

Researchers focused on more than 433,000 people between the ages of 38 and 73. They asked participants whether they were a morning or evening person, and to what degree (moderate or definite). The study then tracked deaths up to six and a half years later.

The research found “night owls” had a 10% greater risk of dying than morning people. The study also found that evening types had higher risks for conditions such as diabetes and psychological disorders.

“Night owls trying to live in a morning lark world may have health consequences for their bodies,” said co-lead author Kristen Knutson, associate professor of neurology at Northwestern University Feinberg School of Medicine, in a statement published Thursday.

The inclination to live as a night owl or morning person might not be by choice. A 2017 study claims those tendencies could be linked to your genes.

Knutson said researchers want to test whether night owls can convert to morning people, and whether overall health improves. In the meantime, society could play a role in catering to a person’s morning or evening preferences.

“If we can recognize these chronotypes are, in part, genetically determined and not just a character flaw, jobs and work hours could have more flexibility for owls,” Knutson said.

Safwan Badr, past president of the American Academy of Sleep Medicine and a sleep expert with Detroit Medical Center and Wayne State University, explains how sleep deprivation is associated with an increased risk of many serious health problems.


Results were published Thursday in the journal Chronobiology International.



Cognitive training for freezing of gait in Parkinson’s disease: a randomized controlled trial



The pathophysiological mechanism of freezing of gait (FoG) has been linked to executive dysfunction. Cognitive training (CT) is a non-pharmacological intervention which has been shown to improve executive functioning in Parkinson’s disease (PD). This study aimed to explore whether targeted CT can reduce the severity of FoG in PD. Patients with PD who self-reported FoG and were free from dementia were randomly allocated to receive either a CT intervention or an active control. Both groups were clinician-facilitated and conducted twice-weekly for seven weeks. The primary outcome was percentage of time spent frozen during a Timed Up and Go task, assessed both on and off dopaminergic medications. Secondary outcomes included multiple neuropsychological and psychosocial measures. A full analysis was first conducted on all participants randomized, followed by a sample of interest including only those who had objective FoG at baseline and completed the intervention. Sixty-five patients were randomized into the study. The sample of interest included 20 in the CT group and 18 in the active control group. The primary outcome of percentage time spent frozen during a gait task was significantly improved in the CT group compared to active controls in the on-state. There were no differences in the off-state. Patients who received CT also demonstrated improved processing speed and reduced daytime sleepiness compared to those in the active control. The findings suggest that CT can reduce the severity of FoG in the on-state; however, replication in a larger sample is required.


Freezing of gait (FoG) is a disabling symptom of Parkinson’s Disease (PD), which presents as a “brief, episodic absence or marked reduction of forward progression of the feet, despite the intention to walk”.1 FoG is well-known to lead to falls2 and lower quality of life, making it an important target for treatment.3 The pathophysiological mechanism of FoG has been linked to executive dysfunction, particularly in aspects of cognitive control,4 which aligns with neuroimaging evidence showing fronto-parietal and fronto-striatal impairments.5 Recent meta-analytic data suggests that cognitive training (CT) is an effective6 and important7 behavioral intervention for improving cognition, and in particular executive functions, in patients with PD.

Given that these executive deficits have been hypothesized to underlie the pathophysiological mechanisms of FoG, it is plausible that reducing executive dysfunction via CT may lessen the severity of FoG, by mediating more effective fronto-striatal function.8 A number of studies have now shown that CT in PD can lead to neuroplastic changes by way of increased activity and functional connectivity in frontal-striatal regions.9,10,11 Given that FoG relates to dysfunction in these areas, it is reasonable to hypothesize that CT may facilitate more efficient processing between frontal and striatal regions, leading to a reduction of FoG severity. Interested readers are directed to a previous review from our group, which has provided more extensive evidence and rationale for this proposal.12

In this study, a double-blind randomized controlled trial was conducted to explore the efficacy of CT targeting executive functions in PD patients with FoG. We hypothesized that participants receiving CT would show improvements as illustrated by the reduced severity of FoG after completion of the intervention. Additionally, we anticipated that secondary outcomes including cognitive and psychosocial measures would show improvement following the CT program.



Figure 1 illustrates the flow of participants through the study. The first participant was randomized in April 2013 and the last in June 2015. There were nine dropouts in the active control (AC) condition (five prior to beginning the program), and one participant was lost to follow-up. There were no dropouts in the CT group. In addition, one AC participant was removed entirely from the analysis as their diagnosis was changed from PD to Progressive Supranuclear Palsy, and a second was removed as they could not complete the TUG assessments due to severe motor disability. Two participants were removed from the CT group due to inadvertent incorrect randomization; these participants were retrospectively identified as not meeting the original inclusion criteria, and it was therefore determined that their data would not be analysed at any point.

Fig. 1

CONSORT Flow diagram

Fig. 2

Each condition involved two trials, with a left- and a right-turn version. In the 180° condition, the participant walked to the box, turned around and returned to their chair; in the 540° condition, they completed a 540° turn in the box before returning to the chair; in the box condition, participants shuffled around the box, keeping their inside foot to the outside of the box; in the dual-task condition, participants did the same as in the 180° condition but completed a cognitive task as they walked, either naming the months backwards or reciting multiples of 9 or 7 aloud. The %TF outcome was calculated by summing all FoG episodes across the four conditions and dividing by the total time to complete all conditions.

Upon intervention completion, TUG scoring indicated that despite self-reporting FoG, nine participants from the CT group and four from the AC group did not objectively exhibit FoG on baseline assessment. Therefore, we designed two analysis populations post-hoc. The full analysis set (FAS) included all participants randomized into the study, whether or not they dropped out or showed objective FoG at baseline. The sample of interest set (SIS) was decided upon post-hoc to account for the fact that a number of participants did not display FoG at baseline; it therefore included only those participants who showed objective FoG on baseline TUG testing and completed the study in full. The SIS population is considered the analysis of interest and is therefore what is reported in the results; however, all analyses were initially run on the FAS sample to confirm no sampling bias. Demographic data for both the FAS and SIS samples are provided in Table 1.

Table 1 Demographic data of participants in both analysis samples

Primary outcome

Results of the SIS analysis for the primary outcome are displayed in Table 2. This analysis showed that patients in the CT group had a large and statistically significant reduction in FoG severity in the on-state compared to participants in the AC group. There was no difference in the off-state. The FAS analysis was consistent with these results, suggesting that the SIS sample was not biased. We did not compare performance across each of the four conditions separately, on and off medication, first because this was not part of the predetermined outcome plan, and second because we felt such an exploratory analysis was inappropriate given the smaller-than-anticipated sample.

Table 2 Primary outcome data between groups before and after intervention

Secondary outcomes

Results of the SIS analysis for the secondary outcomes are displayed in Table 3. In the SIS analysis, there were no statistically significant differences for any of the secondary outcomes over time between groups.

Table 3 Secondary outcome data between groups before and after intervention

Covariate analysis

The covariate analysis showed that the results for the primary outcome remained unchanged by the introduction of covariates (i.e., still statistically significant). However, in terms of secondary outcomes, the inclusion of DDE as a covariate led to TMT-A and daytime sleep disturbance scores becoming significant, with those in the CT group improving compared to the AC.


This pilot study represents one of the largest RCTs of CT to date in PD. Though interpretation of the results must remain cautious owing to the limitations outlined below, the results point to the potential for CT to reduce the severity of FoG in people with PD. We showed that CT led to a large and significant reduction of FoG severity compared to AC while in the on-state, but this was not replicated in the off-state. These results were consistent whether or not we included participants who did not display FoG at baseline, and when accounting for covariates. We suggest these results warrant larger-scale replication, employing the methodological adjustments we suggest below.

The result of FoG improving only during the on-state is noteworthy. We preface this discussion by noting that the on-state is the clinically relevant behavioral state, as patients in their day-to-day life take dopaminergic medications as prescribed to minimize time in the off-state. Our provisional supposition is that participants in the off-state were too impaired to benefit from any of the changes initiated through CT. Training was expected to impact frontal processing and also occurred in the on-state. In the dopamine-depleted state, it is conceivable that striatal dysfunction overshadowed any benefit of CT,8,13 and FoG could not be improved. Our future analysis of functional neuroimaging outcomes in this study may be able to unravel this further.

A number of trials in older adults have now shown that CT can have a beneficial impact on multiple gait parameters.14,15,16 In PD specifically, a pilot study by Milman and colleagues17 showed that 12 weeks of CT could improve TUG performance. Unfortunately, this pilot study did not employ a control group. Therefore, the current results are an important extension showing improvements on TUG performance via the reduction of FoG, compared to an active control group.

Given that FoG was reduced in the CT group, we expected there to be additional improvements in tests of executive function (EF), which were presumed to underlie any improvement of FoG. However, changes on these outcomes did not reach statistical significance. We were therefore unable to confirm the hypothesis that improving EF would be the driver of reduced FoG severity. It is possible, however, that our smaller sample size was a factor (though it was deemed inappropriate to conduct a post-hoc power analysis18). Indeed, there were near medium-sized effects (d ≥ 0.45) for many of the executive tests on which we anticipated improvements, including TMT-B and shift measures of the affective go-no-go test (AGN) and verbal fluency (VF) (see Table 2). We do note, however, that when adjusting for the effect of dopaminergic medication, the CT group showed medium-sized, significant improvements compared to AC in processing speed and daytime sleepiness.


This study has limitations which warrant consideration. The first is that we did not meet the projected sample size target due to feasibility issues with recruitment. The second is that there were a number of dropouts in the AC group, although we note that over half of these occurred prior to commencement of the intervention and only one was due to a lack of interest. The third is that participants were randomized prior to TUG scoring. This was necessary to avoid delaying enrollment, as TUG scoring is time-consuming and requires skilled, trained raters. Nonetheless, we attempted to address these limitations by running the FAS analysis, which confirmed our primary result.

Additionally, despite random allocation, the groups were unbalanced in their baseline FoG severity: the CT group had more FoG than the AC group in the on-state and less in the off-state. We stratified the randomization by cognitive functioning (MOCA scores); however, it may be more appropriate in future to stratify by objective FoG scores at baseline. We highlight, however, that the results remained when accounting for the impact of LEDD, and that the on-state is the clinically meaningful state. Related to this unbalanced severity, it is important to highlight that the AC group actually had worse FoG at follow-up compared to the CT group in the on-state, and this pattern of results was replicated in the FAS analysis. Replication with a larger sample is needed to determine whether this is a reliable finding or represents the variability found in small samples such as this one.

Future directions

We believe there is reason to be hopeful for trials such as this in the future. Feedback from participants and family members involved in the groups was overwhelmingly positive, our pilot results highlight positive trends, and the importance of nonpharmacological trials including CT has become increasingly clear.7,19,20 We suggest that replication of this trial is warranted. However, in the hope that future work can learn from some of the issues raised during this study, we offer the following suggestions.

Future studies should, where possible, aim to score FoG severity prior to enrollment. A threshold for severity (e.g., >5%) should be specified for eligibility, and stratification across groups could also be based on this. Where possible, additional methods of FoG measurement could increase the reliability of %TF scores, such as gait mats and accelerometry data, as well as repeat TUG assessments to address measurement variability. Multisite recruitment would increase the potential sample size without relying on home-based CT, which we do not believe would be a viable option,21 particularly in this sample. The inclusion of additional data to aid analysis, such as measurement of expectancy effects and CT training data, could be useful. Finally, we did not include a long-term follow-up assessment. This has often been raised as a criticism of CT, though we note that very few interventions elicit sustained improvements after the cessation of treatment. Thus, it is likely that clinically, CT needs to be delivered continuously to maintain any benefits found, just like most other interventions (e.g., exercise, medications). Nevertheless, a better understanding of how long such results are maintained22 would be useful for future trial design and clinical applications, and future studies could try to obtain this information where feasible.


The current study provides preliminary evidence that CT can reduce the severity of FoG in PD during the on-state. This improvement was seen without concurrent, significant changes to executive functioning (despite near medium-sized effects on these measures), but in the context of improved processing speed and reduced daytime sleepiness. Despite the limitations of this study, these results add to the growing body of evidence showing that CT is a useful therapeutic technique worthy of continued exploration in PD.


Study registration

This study was registered on 5 April 2013 through the Australian and New Zealand Clinical Trials Registry (ACTRN12613000359730) and was approved by the Human Research Ethics Committee of the University of Sydney. Written informed consent was obtained from all participants.


Eligible participants were those diagnosed with idiopathic Parkinson’s disease based on the UK Brain Bank clinical criteria,23 with self-reported FoG at the time of assessment, and who were free from dementia as determined by a score of ≥24 on the mini-mental state examination (MMSE).24


The study was advertised in a local PD community magazine as well as local PD community support groups. Potential participants were also recruited from the Parkinson’s Disease Research Clinic at the Brain and Mind Centre, University of Sydney. Interested participants were invited to participate if they had previously reported a positive score on Question 3 of the Freezing of Gait Questionnaire (FOG-Q): “do you feel that your feet get glued to the floor while walking, making a turn, or trying to initiate walking (freezing)?”.25

Prior to recruitment, we used baseline data from a previously published trial26 to conduct a power analysis using a conservative effect size estimate of at least 0.2 in the study’s primary outcome. This suggested the minimum sample size required for each group was 39 (based on power = 0.80 and α = 0.05).

Study design

The study was a double-blind randomized active controlled trial. Interested patients were enrolled by CCW and LM after they met eligibility criteria during a baseline assessment and were then randomized into either the CT or an AC group. Conditions were masked as either “morning” or “afternoon” sessions, and the order of these was randomized between recruitment waves prior to trial commencement. In order to facilitate blinding, participants were told that each session involved different computerized activities, but were not explicitly told of a treatment or control group. Randomization of participants and morning/afternoon sessions was carried out using a randomly generated number sequence allocated by a blinded researcher not involved in trial recruitment, data gathering, assessments or training. Randomization was undertaken using permuted blocks and stratified by cognitive functioning, with strata defined by Montreal cognitive assessment (MOCA) scores of <26 or ≥26. Participants were advised of their allocation into the morning or afternoon session by way of sealed opaque envelopes delivered by CCW upon completion of the baseline assessments. Post-intervention assessments were conducted by clinicians who were blinded to treatment allocation. All participants allocated to the AC group were offered the opportunity to complete CT after their involvement in the trial was complete. Ten participants elected to complete this, with those who declined citing time commitments as the primary reason.


Baseline and post-intervention assessments were each completed in two parts: on and off medications. The on-state assessment included a neuropsychological test battery, psychosocial measures, part III of the Movement Disorder Society’s revision of the Unified Parkinson’s Disease Rating Scale (MDS-UPDRS),27 and a modified timed up-and-go (TUG) task. These assessments were completed in a random, counterbalanced order and took approximately 2.5 h. Baseline assessments were conducted within 3 weeks prior to training commencement, and follow-up assessments within 3 weeks of the intervention finishing. The practically defined off-state assessment was completed in the morning on a different day, when participants were asked not to take their usual Parkinson’s medication until after the assessment; it comprised a repeated MDS-UPDRS part III and TUG, taking approximately 1 h. Those with deep brain stimulation did not complete the off-state assessment. A random subset of participants also underwent neuroimaging; however, the investigation of any training-induced changes is a tertiary outcome and is therefore not included in the current manuscript.

Primary outcome

The primary outcome was percentage of time spent frozen (%TF) across all four trials of a TUG assessment. Percentage was chosen as the primary outcome as it was anticipated to be more sensitive than a reduction on the FOG-Q, whilst accounting for inter-individual variability in gait speed and the variable duration of freezing episodes across TUG tasks.26 In each task, the participant was requested to get up from a chair, walk to a square box shape taped to the floor five meters ahead and complete both a left and a right turn (see Fig. 2). TUGs were video recorded and then scored independently post-assessment. Six scorers (MG, JMH, AJM, MG, JYYS, and KAEM) were randomly distributed videos of the TUGs. Scorers were given baseline and follow-up TUG videos in a random order for the same participant, to minimize pre-post scoring variability. FoG was tagged in the video at any point when a participant made a paroxysmal and involuntary cessation of normal progression of the feet through the task. This included a typical trembling of the feet, short shuffling steps of a few centimetres in length or a complete motor block.28

The %TF outcome was calculated by summing all FoG episodes across the four conditions and dividing by the total time to complete all conditions. Inter-rater reliability amongst blinded scorers was strong; it was assessed by giving all scorers a random selection of the same six videos to score independently. The intraclass correlation coefficient was .902 across all FoG episodes (average %TF: 9.56%). We note that two of the six videos by chance did not contain FoG; they were included to confirm that no false-positive scoring had occurred. However, as this inflated reliability across scorers, we re-calculated the coefficient with the two videos removed. Scoring remained consistent across raters (.865; average %TF: 14.31%).
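As a concrete sketch of the %TF arithmetic described above, the calculation reduces to summing freeze durations and dividing by total completion time. All episode durations and trial times below are invented for illustration; they are not the study's data.

```python
# Sketch of the %TF (percentage of time spent frozen) outcome.
# All episode durations and trial times below are invented for illustration.

def percent_time_frozen(freeze_episodes, trial_durations):
    """Sum all FoG episode durations across the four TUG conditions,
    then divide by the total time taken to complete all conditions."""
    total_frozen = sum(sum(episodes) for episodes in freeze_episodes)
    total_time = sum(trial_durations)
    return 100.0 * total_frozen / total_time

# One list of episode durations (seconds) per condition:
# 180 degrees, 540 degrees, box, and dual task
episodes = [[1.2, 0.8], [3.5], [], [2.0, 1.5]]
durations = [14.0, 22.0, 18.0, 16.0]  # completion time (seconds) per condition

print(round(percent_time_frozen(episodes, durations), 2))
```

A participant with no freezing in any condition would score 0%, while longer or more frequent episodes push the percentage up, normalized for how long that participant takes to walk.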

Secondary outcomes

Cognitive assessment

To assess global functioning for descriptive purposes, the MMSE24 and MOCA29 were used. Total and delayed Hopkins verbal learning test-revised scores were used to assess verbal memory.30 To assess attention and working memory, the total score from the Digit Span subtest of the Wechsler Memory Scale was used.31 To assess verbal fluency, the total words generated on each condition of the VF subtest of the Delis–Kaplan executive function system was used.32 In this test, part 1 measures phonemic fluency across three letters, part 2 measures semantic fluency across two categories while parts 3 and 4 assess switching between two differing categories. Part 3 represents the total number of correct items, while part 4 represents the total number of correct switches. Processing speed was assessed by the number of correct responses on the oral symbol digit modalities test.33 For these measures, higher scores are indicative of better performance.

To assess processing speed and cognitive flexibility, times for parts A and B respectively of the trail making test (TMT) were used.34 Finally, the AGN of the Cambridge neuropsychological test automated battery35 was used as a computerized measure of inhibitory control/switching. Mean latency post-switch was used to determine performance in addition to the number of commissions or omissions. For these tests, lower scores were indicative of better performance. Alternate versions of each test with the exception of Digit Span (not available) were used at baseline and follow-up to minimise potential practice effects.

Psychosocial measures

Participants completed several questionnaires targeting mood and wellbeing. The Hospital Anxiety & Depression Scale was used to assess anxious and depressive symptoms.36 The Scales of Outcomes in PD (SCOPA)-Sleep was used to assess sleep quality in terms of both daytime sleepiness and night time sleep disturbance.37 The Parkinson’s Disease Questionnaire (PDQ-39)38 was used as a measure of quality of life. Finally, if a participant lived with a carer, the Cambridge behavioral inventory-revised39 was used as an informant report of cognitive and behavioral changes. For all of these scores, a higher score was indicative of more substantial impairment.


Both the CT and AC groups attended sessions at the Brain and Mind Centre at the University of Sydney in our designated CT laboratory, and both conditions were matched in terms of time, clinician contact, computer use and social interaction. In accordance with our previous CT programs for older adults,40,41 the intervention was completed in a group format (n ≤ 10) and comprised 2-h sessions, twice weekly over 7 weeks (14 sessions in total). Both groups were supervised and facilitated by CCW and LM. The first hour of the session was identical across CT and AC groups: (i) 30–45 min was designated to psycho-education on a number of topics relevant to PD, including cognition, sleep and mood, delivered by multidisciplinary specialists and researchers from the Brain and Mind Centre; (ii) participants then took an enforced break of 10–15 min. This first hour, whilst not CT per se, was included for both conditions as a means of increasing participant engagement and has previously been shown to support our excellent program adherence rates.40,41 The second hour of the session differed across CT and AC groups:

(A) CT: Participants in this group completed a program of computerized CT tasks, selected for their focus on executive functions and on the basis of our previous experience employing the “Neuropsychological Educational Approach to Remediation”41,42 in providing computerized CT to >400 older adults,41,43,44 including those with PD.40 Tasks included designated “brain training” programs (e.g., Attention Process Training-III45) as well as commercially available software (e.g., computerised Sudoku), which were determined by clinical neuropsychologists (LM, SLN) to target the cognitive processes of most interest to FoG (inhibitory control, attentional set-shifting, working memory, processing speed and visuospatial skills).4,12 Performance was monitored by the facilitators, with the focus of progressively making the tasks more difficult where possible for the participant. These changes were made in an individualized manner based on performance and in consultation with the participant. Therefore, while the tasks allocated for each session were standardized across all participants, there were differences in how far each participant progressed in terms of difficulty. The majority of exercises provided the participant with feedback in the form of scores, which was further discussed between facilitators and participants to help them better understand their performance.

(B) AC: Participants in this group completed a series of non-specific computer-based tasks, predominantly watching informative nature videos and answering content-related questions as previously used,46 as well as online “treasure hunts” devised by our team. These tasks were designed to provide broader, generalized cognitive engagement compared to the targeted focus on specific cognitive functions in the CT condition. Therefore, those in this group were not expected to have reduced FoG severity; the condition was rather intended to match for clinician and peer contact, along with computer use.

Statistical analysis

To minimize any potential bias, statistical analyses were conducted by a consultant statistician experienced in RCTs who was based at the University of Sydney (see acknowledgements) and who was not involved in any other aspect of the trial. Data was analysed using SAS software version 9.4.

The analysis took the form of a mixed-effects model with fixed effects fitted to all endpoints, testing the null hypothesis of no difference in change over time across groups against the alternative hypothesis of a difference between the two arms. An additional term in the model accounted for the repeated measures pre- and post-intervention. “Participant” was declared a random effect, and an unstructured covariance pattern between baseline and post-intervention was used. To cater for the small sample size, the Kenward–Roger method was applied, which corrects the standard error bias of the fixed effects by inflating the variance and applies Satterthwaite’s adjustment to the degrees of freedom. Endpoints with non-normal variance were transformed to the log10 scale for analysis; results were back-transformed for interpretation and represent geometric means. Cohen’s d was calculated as a measure of effect size in the CT group compared to the AC group, with 0.2, 0.5, and 0.8 considered small, medium and large effects.47
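As an illustration of the Cohen's d effect size mentioned above, a minimal pooled-standard-deviation implementation using only the Python standard library might look like the following. The group scores are invented for demonstration and are not the study's data.

```python
# Minimal sketch of Cohen's d with a pooled standard deviation.
# The group scores below are invented for illustration.
from statistics import mean, stdev

def cohens_d(group1, group2):
    n1, n2 = len(group1), len(group2)
    s1, s2 = stdev(group1), stdev(group2)
    # Pooled SD weights each group's variance by its degrees of freedom
    pooled = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(group1) - mean(group2)) / pooled

ct = [4.1, 5.3, 3.8, 6.0, 4.9]  # e.g., change scores in the CT group
ac = [2.2, 3.1, 2.8, 3.5, 2.4]  # e.g., change scores in the AC group
print(round(cohens_d(ct, ac), 2))
```

By the conventional cutoffs cited in the paper, values around 0.2, 0.5, and 0.8 would be read as small, medium, and large effects respectively.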

Covariate analysis

An additional analysis was undertaken to investigate the effects of the following covariates on all outcome measures: age, education, levodopa equivalent daily dose (LEDD),48 years since diagnosis, and the number of days between CT completion and the follow-up assessment (days until FU). Only significant results are reported.


Data availability statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

What Comes After The #Wearable Health Revolution?

The wearable health trackers’ revolution has been producing devices that let us measure vital signs and health parameters at home.

It is changing the whole status quo of healthcare, as medical information, and now health tracking, are available outside the ivory tower of medicine.

A 2014 report showed that 71% of 16–24-year-olds want wearable technology. Predictions for 2018 include a market value of $12 billion, shipments of 112 million wearables, and that one third of Americans will own at least a pedometer.


Now a growing population is using devices to measure a health parameter, and while this market is expected to continue growing, devices are expected to shrink, get cheaper and become more comfortable. At this point, nobody can be blamed for over-tracking their health, as we have the chance to do so for the first time in history. Eventually, as the technology behind them gets better, we should reach the stage of meaningful use as well.

Let’s see what I can measure today at home:

  • Daily activities (number of steps, calories burnt, distance covered)
  • Sleep quality + smart alarm
  • Blood pressure
  • Blood oxygen levels
  • Blood glucose levels
  • Cardiac fitness
  • Stress
  • Pulse
  • Body temperature
  • Eating habits
  • ECG
  • Cognitive skills
  • Brain activities
  • Productivity
  • I also had genetic tests and microbiome tests ordered from home.


What else exists or is yet to come? Baby and fetal monitors; blood alcohol content; asthma monitors; and I could go on with this list for hours.

The next obvious step is designing smaller gadgets that can still provide a lot of useful data. Smartclothes are meant to fill this gap. Examples include Hexoskin and MC10. Both companies are working on different clothes and sensors that can be included in clothes. Imagine the fashion industry grabbing this opportunity and getting health tracking closer to their audiences.


Then there might be “insideables”, devices implanted into our body or just under the skin. Some people already have RFID implants with which they can unlock a laptop, a smartphone or even the garage door.

Also, “digestables”, pills or tiny gadgets that can be swallowed, could track digestion and the absorption of drugs. Colonoscopy could become a diagnostic procedure that most people are not afraid of: a little pill cam could be swallowed, and the recordings would become available in hours.

Whatever direction this technology is heading, believe me, I don’t want to use all my gadgets to live a healthy life. I would love to wear a tiny digital tattoo that can be replaced easily and measures all my vital signs and health parameters. It could notify me through my smartphone if there is something I should take care of, or something I should get checked by a physician.


But what matters is that I can finally become the pilot of my own health.

Right now patients are sitting in the cockpit of their planes and are waiting for the physicians to arrive.

Insurance companies such as Oscar Health have embraced this movement and offer incentives and rewards (e.g., an Amazon gift card) if the patient agrees to share their data obtained from health trackers, in this way motivating the patient towards a healthier life.

There is one remaining step then: the era of the medical tricorder, with gadgets such as the Scanadu that can detect diseases and microbes by scanning the patient or touching the skin. The Nokia Sensing XChallenge will produce ten such devices by this June, which will have to test their ideas on thousands of patients before the end of 2015.


I am very much looking forward to seeing the results. Until then, read more about health sensors and the future of portable diagnostic devices in my new book, The Guide to the Future of Medicine.


A sudden loss of wealth may be hazardous to your health

Traders on the floor of the New York Stock Exchange contemplate a big loss. Americans who experienced a sudden and substantial loss of wealth found themselves facing an increased risk of early death, according to a new study. (Richard Drew / Associated Press)

Your financial health may have more bearing on your physical health than you realize.

American adults who experienced a sudden and substantial loss of wealth were 50% more likely to die over a 20-year period than others in their age group whose financial picture remained relatively stable or improved.

As bad as things were for those who experienced a “negative wealth shock,” they were even worse for Americans who didn’t have any wealth in the first place. These folks were 67% more likely than their financially secure counterparts to die during a 20-year study period.

The findings, published Tuesday in the Journal of the American Medical Assn., suggest that you should treat your bank account balance as a vital sign.

The same goes for the value of your stock market investments, your individual retirement account, your home, your vehicles, your business or your “other substantial assets.”

Researchers tallied all of these things for a group of 8,714 Americans who participated in the Health and Retirement Study. They were born between 1931 and 1941 and were tracked from 1994 until death or 2014, whichever came first.

The study authors calculated the net worth of each participant in 1994 and updated that figure every other year. People were judged to have experienced a negative wealth shock if their net worth fell by 75% or more in a two-year period.
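The shock criterion described above, a drop of 75% or more in net worth between consecutive biennial measurements, can be sketched as a simple check. The function name and the sample trajectory below are hypothetical, not taken from the study.

```python
# Illustrative check for a "negative wealth shock": a drop of 75% or more
# in net worth between any two consecutive biennial measurements.
# (Participants with no prior wealth were analyzed as a separate group
# in the study, hence the previous > 0 guard here.)

def had_wealth_shock(net_worths, threshold=0.75):
    """net_worths: net worth at each biennial wave, in dollars."""
    for previous, current in zip(net_worths, net_worths[1:]):
        if previous > 0 and (previous - current) / previous >= threshold:
            return True
    return False

# A hypothetical participant whose net worth collapses from $110,000 to $8,500
waves = [120_000, 110_000, 8_500, 12_000]
print(had_wealth_shock(waves))
```

Note that a later recovery (here, back up to $12,000) does not undo the shock classification; the criterion is triggered by any single 75%+ two-year drop.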

This sad fate befell 2,430 — or about 1 in 4 — of the study participants. Among them, their median loss was 92% of their net worth, which amounted to a median value of $101,568.

This sudden loss could have caused stress, inflammation and/or high blood pressure, any of which could make serious cardiovascular problems more likely, the study authors noted. In addition, a financial blow of this scale may well have prompted people to skip important (but expensive) medical appointments or to stop filling necessary (but pricey) prescriptions.

The Health and Retirement Study data didn’t provide researchers with clarity on these points. But it did show that the mortality rate for this subset of participants was 64.9 deaths per 1,000 person-years. That was more than double the mortality rate for those in the financially stable control group (30.6 deaths per 1,000 person-years).
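The mortality figures above are expressed as deaths per 1,000 person-years, which is simply the number of deaths divided by the total years of follow-up contributed by all participants. A trivial sketch, with invented counts rather than the study's data:

```python
# Mortality rate as deaths per 1,000 person-years of follow-up.
# The counts below are invented for illustration, not the study's data.

def deaths_per_1000_person_years(deaths, person_years):
    return 1000.0 * deaths / person_years

# e.g., 130 deaths observed across 2,000 person-years of combined follow-up
print(deaths_per_1000_person_years(130, 2_000))
```

On this scale, a rate of 64.9 means roughly 65 deaths for every 1,000 years of combined follow-up time, regardless of how those years were split across participants.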

After the researchers adjusted for factors such as race and ethnicity, age, education level, body mass index and smoking status, they calculated that the risk of dying between 1994 and 2014 was 50% higher for those who experienced a negative wealth shock. This held up regardless of how much money people had at the start of the study. However, the more risk-averse a person was, the stronger the association between a negative wealth shock and the risk of early death.

“The approximately 50% relative increase in all-cause mortality that follows a financial loss of this magnitude is similar to the increase associated with a new diagnosis of coronary heart disease,” according to an editorial that accompanied the study. “It is sobering to contemplate that mortality rates were even greater among people who had no assets to lose.”

For the 749 people who began the study with no net assets or a negative net worth, the mortality rate was 73.4 deaths per 1,000 person-years. After adjustments, their risk of premature death was 67% higher than for those in the control group, the study authors calculated.


Carri’s Corner:  The Secrets to Billing for Diabetic Shoes

This month’s Carri’s Corner concentrates on the secrets to billing for diabetic shoes. I am supplying a link to a wonderful article that was posted in Podiatry Today magazine in 2011. It contains everything you need to know to confidently bill for diabetic shoes, and it explains precisely what documentation you would need to have on hand. Many DPMs still struggle with getting paid for these shoes. This article is filled with valuable information and will save you the trouble of having to research this complicated issue. I hope it will assist you with the daunting task of billing for diabetic shoes.