Medicare, Telehealth, and Congress

Medicare's annual spending on Telehealth is around $5M. Compare that number with the $505B Medicare spends annually to cover about 60 million Americans and you get the idea that Telehealth is not a prevalent service. CMS stipulates the allowable originating sites of care, that is, where the patient physically is, such as the home, which is where most patients end up recovering and is arguably their preferred place to be. Care can and should be provided where the patient is, as long as it is cost effective and clinically approved.

At the moment, Telehealth is only available in Health Professional Shortage Areas and not in larger metropolitan areas where the population is denser. Eliminating this restriction would most probably bring a surge of service requests, with medical services and consultations conducted from the comfort of one's home.

Currently, Telehealth is restricted to voice and video. Congress can help by adding other modalities, such as active monitoring with wearables and sensors. This will require an expansion of the CPT and HCPCS codes to actually cover these real-time, ongoing services.

Lastly, Congress can bring state and federal offices together to normalize the licensing process, allowing physicians to care for patients beyond their state's borders. This would alleviate the demand on physicians in larger, denser populations and also provide patients with more choices.

There are six acts under way which are relevant to Telehealth; four of them are outlined below:

Interstate Telehealth licensing

The Federation of State Medical Boards (FSMB) drafted a compact that will enable physicians to practice across state lines. The compact was adopted by legislators in Alabama, the 7th state to enact it and the number required for it to take effect. The act is pending approval in 11 other states.

Medicare Telehealth Parity Act of 2015 – H.R.2948

This act proposes the expansion of Telehealth in three phases:

Phase 1

Expands what sites qualify as an “originating site” to include federally qualified health centers, rural health clinics, and counties in metropolitan areas with populations under 50,000.

Adds services provided by diabetes educators, respiratory therapists, audiologists, occupational therapists, speech-language therapists, and physical therapists.

Phase 1 also provides Medicare coverage of asynchronous (store-and-forward) Telehealth services across the country (beyond Alaska and Hawaii).

Phase 2

Expands qualifying originating sites to include a “Home Telehealth Site” as well as counties in metropolitan areas with populations of 50,000–100,000.

Phase 3

Expands the definition of originating locations to include counties in metropolitan areas with populations above 100,000. CMS is also authorized to develop and implement new payment methods for these services.

ACO Improvement Act – H.R.5558

An act to improve the ACO model by providing additional incentives based on quality of care and by increasing collaboration between patients and physicians. Among the financial incentives for performance, this act includes the patient's option to choose as their primary care provider a nurse practitioner or physician assistant in rural and underserved areas.

Telehealth Enhancement Act – H.R.3306

This bill, and its revised 2014 version, would add remote monitoring services to Medicare home health payments and aims to expand coverage to all critical access and sole community hospitals. It would also cover home-based video services for hospice care, home dialysis, and homebound beneficiaries, and it would allow states to set up high-risk pregnancy networks.

what google brillo means for healthcare IT

today google announced brillo, its IoT operating system based on android, along with weave, its matching communication protocol. both will go live later this year. google is now making moves to join apple and microsoft in this space, a timely move.

brillo and weave are designed for lightweight devices like cameras, door locks, and other home devices. for the growing market of health-related devices and monitors, this announcement means another stride in that direction, in the shape of affordable scales, wearables, remote monitoring, telehealth, and other means of collecting specific and relevant information from patients where they are.

with small, low-memory-footprint computers scaling up, we should expect a wave of innovation in healthcare-related products.

patient engagement

patient engagement as defined by healthcareITnews.com:

“Patient engagement refers to ongoing and constructive dialogue between patient and practitioner. Within the scope of healthcare IT, patient engagement is driven by technology ranging from patient portals, which enable patients to view test results and records online and communicate with doctors, to electronic data capturing platforms that result in more accurate and streamlined diagnostic information. A high emphasis has been placed on patient engagement in Stage 2 meaningful use.”

the healthcare system in the US is wonderful at focusing on one thing that has gone wrong and fixing it. it is less effective at dealing with comorbidities, and less so still at supporting the daily actions needed to maintain a state of harmony. mobile devices seem like the right channel to revolutionize the current state. can healthcare be as engaging as our digital social lives? can the same social circle serve as a catalyst to push us toward healthier choices?

why does it seem like we are not getting patient portals and PHRs right? to start off, the sicker the patient, the less likely they are to use the portal. to put this in perspective, 78% of physicians use an EHR while only 17% of patients use portals (2013 data from research by Ancker). it is clear that patients do not yet see the value of the portals, nor of gaining access to their PHR. what is the right model to bond the patient and provider together?

part of the problem is that PHRs are mostly provider-facing, i.e. they are laid out and detailed in a clinical way that makes more sense to the physician than to the patient. take clinical notes, for example, which include terms that do not make much sense to the patient.

but hold on for a second. aren’t patients already very much engaged? they suffer the pain, go through the medical procedures, pay the bills… patients are indeed engaged as they fill out the same form multiple times while visiting different departments within the same hospital, and as they are expected to accurately name the medications they are taking and their dosages. isn’t it the system that, inadvertently, discourages engagement?
today, providers are almost exclusively responsible for deciding a patient’s treatment, next site of care, medications, etc. if we are seeking patient engagement, those key decisions should include the patients and their families, so they can voice their opinions, participate, and share responsibility.

consider nutrition and financial wellness as important gaps. patients should know who is responsible for delivering their care. critical data elements can be tied to services rendered to improve outcomes in a clear and concise way. regardless of a patient’s spoken language, in order to engage them they should be able to specify clearly and effectively what is going on and why they need help, and know that someone got their message and is “working on it”, all within as few clicks as possible.

patient engagement may be the holy grail when it comes to improving outcomes and reducing costs. having the patient and their family and friends participate in the process in a meaningful way, and be proactive in a timely manner, can help move our country’s healthcare system forward.

the necessity of software design

when designing software, many aspects come into play, and design plays an important role in the decisions made at the product level. there is a balance between effort and flexibility that is sometimes linear: quick and simple will hit a wall at some point, will most probably not scale, and will not make it easy to change key features and/or layout. these design decisions extend well beyond choosing the right MVC backend and frontend framework, or deciding whether the data warehouse should be relational, pure noSQL, or a hybrid.

as a product gains users and accumulates data, requirements change and evolve constantly. staying agile in that respect makes sense, especially in the initial startup phase. in general, issues that come up can be mitigated if the product is well designed. like everything else in the world of software, there is a delicate, zen-like balance between effort and efficiency.

dude, where is my code?

so the users LOVE the right sidebar and would like to add a couple more navigation items. awesome. hmm… in which file is that menu created? let me quickly consult the guy who coded it… oh wait, he is no longer with the company… you know what, i can easily start the debugger and step through the code until i see something that makes sense… common practice? probably. efficient? certainly not. the rule of thumb is that it should be obvious as daylight where changes should be made, and everyone should maintain these rules; otherwise things get so messy that the simplest improvements become a pain. good code organization and well-documented methodologies that are taught to the team and well explained are an important first step.

context and boundaries

so you have found the code segment where you think the changes should live. great. before you refactor it, answer this question: are you clear on what this code is supposed to do? on the expected input and output? is it clear what the code is NOT supposed to do? each code segment should stand on its own, unit tests and all, where it is very clear what the method is meant to achieve. no guessing.
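to illustrate, here is a minimal sketch in python (the function and its tests are hypothetical examples, not from any particular codebase) of a code segment with a clear contract: documented input and output, an explicit note on what it is NOT supposed to do, and unit tests that leave no guessing.

    import unittest

    def mg_to_grams(amount_mg: float) -> float:
        """convert a dosage from milligrams to grams.

        expected input: a non-negative number of milligrams.
        expected output: the equivalent amount in grams.
        NOT responsible for: parsing strings, rounding for display,
        or validating against a formulary.
        """
        if amount_mg < 0:
            raise ValueError("dosage cannot be negative")
        return amount_mg / 1000.0

    class MgToGramsTest(unittest.TestCase):
        # the tests spell out the contract: what the code does and what it rejects
        def test_converts_mg_to_grams(self):
            self.assertEqual(mg_to_grams(500), 0.5)

        def test_rejects_negative_input(self):
            with self.assertRaises(ValueError):
                mg_to_grams(-1)

    if __name__ == "__main__":
        unittest.main()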

software fragility

“nir, i’ll be happy to take care of that ticket. just know that if i change that controller the other view will break and we’ll have to take care of that, and then there’s this other controller…” unfortunately, it is common to see software break badly with minor changes (especially on the frontend). this is why both unit and functional testing are critical, but that’s for another post.

development scalability

when one developer’s commit breaks one function, ideally it will not impact other developers who are working on other regions of the code. ideally, code integrations will involve fewer merge conflicts and will be resolved smoothly. this is one good reason to switch to git if you haven’t done so yet.

deployment

updating your product may be a very delicate process, especially if the model changes and you need to update the current schema while staying backward compatible with the current data sets on a live target. now consider your product white-labeled to 50 customers, with different versions deployed to different servers, where ideally you can roll back or quickly patch critical bugs. the complexity of maintaining these machines grows exponentially without a proper deployment strategy.
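as a minimal sketch of one mitigation, here is an additive, idempotent schema migration in python using the standard library’s sqlite3 (the table and column names are hypothetical): only columns with defaults are added, so servers still running the previous version of the code keep working against the new schema, and rollbacks stay safe.

    import sqlite3

    def migrate(conn: sqlite3.Connection) -> None:
        # idempotent: safe to run on every deploy, on every server
        cols = {row[1] for row in conn.execute("PRAGMA table_info(patients)")}
        if "preferred_language" not in cols:
            # additive change: old code that never mentions this column
            # is unaffected, so rolling the app back remains safe
            conn.execute(
                "ALTER TABLE patients "
                "ADD COLUMN preferred_language TEXT DEFAULT 'en'"
            )
        conn.commit()

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE patients (id INTEGER PRIMARY KEY, name TEXT)")
    migrate(conn)
    migrate(conn)  # running it twice is harmless
    print([row[1] for row in conn.execute("PRAGMA table_info(patients)")])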

productivity

this is one important metric to wrap your head around so you can benchmark your team. as you build your system, what is the effort required to add a specific feature? is the investment linear in terms of time and capital? does good design help people be more productive?

complexity

as the system grows, the requirements will change, because you get good feedback from your end users and what you thought you knew a year ago has now turned obsolete. supporting multiple, different code bases, along with requirements for higher availability, redundancy, better performance, and backward compatibility, all come down the pipe as your platform and client base grow.

some final thoughts… when it comes to design, we now know that some prior planning and careful thinking can go a long way.

the bottom line is that the product owner needs to stay focused on the business requirements. the product should solve only the problems it is designed to solve, so that the time dedicated to development is well spent.

healthcare and big data

everywhere you turn, people are talking about big data, hadoop, and sharding. rightfully so. in this day and age, managing a lot of data is not an easy task, as performance and scalability are key. traversing large data sets, dividing them into small sections, and distributing the load among many machines (processors) is nothing new.

hadoop emerged out of necessity, to solve specific problems. what hadoop does is provide the infrastructure to connect multiple (cheap) servers into a coherent environment in which i/o- and cpu-intensive problems (algorithms) can be solved.

it all started around 2004, when doug cutting, creator of the lucene document indexing project, set out to make it possible to achieve the same goals in a distributed environment. hadoop, BTW, is named after his son’s yellow elephant toy. in 2006 yahoo hired doug to improve the project so it could index the entire web, and the project was made open source. that marked the start of the revolution.

at its core, hadoop includes two projects: one for distributed storage and one for distributed computing. around those two projects, a vast ecosystem of projects has evolved (and still is evolving).

HDFS: hadoop distributed file system
this file system is designed to store large files and enable large, efficient reads and writes. this is done by dividing each file into sizable chunks, where each chunk is normally stored on 3 nodes, which can be anywhere in the cluster. a “name node” maintains the mapping between a document, its constituent pieces, and the data nodes on which they are stored.
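a quick back-of-the-envelope sketch in python makes the chunking concrete (the 128MB block size and replication factor of 3 are typical defaults; the file size is illustrative):

    import math

    file_size_mb = 1024   # a 1GB file
    block_size_mb = 128   # a typical HDFS block size
    replication = 3       # each block is stored on 3 data nodes

    blocks = math.ceil(file_size_mb / block_size_mb)
    copies = blocks * replication
    print(f"{blocks} blocks, {copies} block copies across the cluster")
    # prints: 8 blocks, 24 block copies across the cluster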

mapReduce:
an API for writing programs that run in parallel. the developer really only needs to write two simple functions, map and reduce, that handle a single document (i.e. an element of data); the framework runs them on multiple machines and takes responsibility for scheduling and for handling errors and failures (network, i/o, etc.). this allows for simple parallel batching, where a “job tracker” synchronizes the execution of the batch processes and each batch is subdivided into smaller tasks handled by the “task trackers”.
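here is a minimal word-count sketch in python: the developer writes only map and reduce, while the small driver standing in for hadoop’s shuffle/sort machinery is purely illustrative (on a real cluster, the job and task trackers distribute this work across machines):

    from collections import defaultdict

    def map_fn(document: str):
        # emit (word, 1) for every word in a single document
        for word in document.split():
            yield word.lower(), 1

    def reduce_fn(word: str, counts: list):
        # combine all the counts emitted for one word
        return word, sum(counts)

    def run_job(documents):
        grouped = defaultdict(list)
        for doc in documents:                  # map phase
            for key, value in map_fn(doc):
                grouped[key].append(value)     # "shuffle": group values by key
        return dict(reduce_fn(k, v) for k, v in grouped.items())  # reduce phase

    print(run_job(["big data is big", "hadoop handles big data"]))
    # prints: {'big': 3, 'data': 2, 'is': 1, 'hadoop': 1, 'handles': 1}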

over time, yahoo and facebook (to mention a few) wrote their own layers over HDFS and mapReduce and shared their work with the community. so hadoop is a code name for a set of technologies that harness the computing power of many machines to perform simple tasks in parallel. hadoop emerged from the world of unstructured data, where hundreds of millions of pages are analyzed. today, big data is being implemented and researched in every facet of the economy, including healthcare.