Hospital Incident Reporting Systems: Time to Slay the Beast

By Bob Wachter | September 20, 2009 | 15 Comments

When the patient safety field began a decade ago with the publication of the IOM report on medical errors, one of its first thrusts was to import lessons from “safer” industries, particularly aviation. Most of these lessons – a focus on bad systems more than bad people, the importance of teamwork, the use of checklists, the value of simulation training – have served us well.

But one lesson from aviation has proved to be wrong, and we are continuing to suffer from this medical error. It was an unquestioning embrace of using incident reporting (IR) systems to learn about mistakes and near misses.

The Aviation Safety Reporting System, by all accounts, has been central to commercial aviation’s remarkable safety record. Near misses and unsafe conditions are reported (unlike healthcare, aviation doesn’t need a reporting system for “hits” – they appear on CNN). The reports go to an independent agency (run by NASA, as it happens), which analyses the cases looking for trends. When it finds them, it disseminates the information through widely read newsletters and websites; when it discovers a showstopper, ASRS personnel inform the FAA, which has the power to ground a whole fleet if necessary. Each year, the ASRS receives about 40,000 reports from the entire U.S. commercial aviation system.

In the early years of the patient safety movement, the successes of the ASRS led us to admonish hospital staff to “report everything – errors, near misses, everything!” Many caregivers listened to these admonitions (particularly nurses; few docs submit IRs, which leads IR systems to paint incomplete pictures of the breadth of hospital hazards) and reporting took off. At my hospital (UCSF Medical Center), for example, we now receive about 20,000 reports a year.

Yes, 20,000 reports – fully half of what the ASRS receives for the entire nation! And believe me, we don’t report everything. If we really did, I’d estimate that my one hospital would receive at least five times as many IRs: 100,000 yearly reports.

But even at 20,000, recall that we are only one hospital among 6,000 in the United States. Since we’re a relatively large hospital, let’s say the average hospital only collects one-quarter as many IRs as UCSF, 5,000/year. That would amount to 30 million reports a year in the United States! (Oh yeah, and then there are SNFs, nursing homes, and all of ambulatory care, but let’s leave them out for now.)

Is this a problem? Yep-per, it is. First of all, IRs are all-but-useless in determining the actual frequency of errors, though they’re often used for this purpose. When I visit hospitals to talk about patient safety, they often show me their IR reporting trends. If the number of IRs has gone up over the past year, they breathlessly proclaim, “This is great. We’ve succeeded in creating a reporting culture – the front line personnel believe that we take errors seriously. We’re getting safer!”

That would sound more credible if hospitals with downward trends didn’t invariably shout, “This is great, we have fewer errors! Our efforts are paying off!”

The point is that we have no idea which one is true – IRs provide no useful information about the true frequency of errors in an institution.

But that isn’t their major flaw. The bigger problem is that IRs waste huge amounts of time and energy that could better be used elsewhere in patient safety (or in patient care, for that matter). Let’s return to my hospital for a moment (and let me apologize to those who thought there would be no math). I’d estimate that input time for the average IR is about 20 minutes (the system requires the reporter to log in, and then prompts her to describe the incident, the level of harm, the location, the involved personnel….).

Once an IR has been submitted, it is read by several people, including “category managers” such as individuals in charge of analyzing falls or medication errors; the charge nurse and the doctor on the relevant floor; and often a risk manager, the patient safety officer, and more. These individuals often post comments about the case to our computerized IR system, and some IRs generate additional fact finding and analyses. I’d estimate that this back-end work comes to about 60 minutes per IR.

In other words, each of our IRs probably generates an average of 80 minutes of work: 20 minutes of reporting and 60 minutes of reading/analysis. For our 20,000 IRs per year, that’s 26,667 hours of work. (Of course, we could shave this number by doing nothing with the submitted IRs – a recent study found that this is precisely what happens in about one-in-four U.S. hospitals, which don’t even bother to distribute IRs to hospital leaders or managers. Sounds like something out of Catch-22 or The Office).

If we value the time of our people doing the work of reporting, reading, analyzing, and acting on IRs (an amalgam of nurses, quality and risk managers, and a few physicians) at an average of $60/hour (salary and benefits), we’re talking about a yearly investment of $1.6 million in my one hospital. Nationally, for 30 million reports, the cost (of 40 million hours of work) would be $2.4 billion! Now we’re talking about real money.
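
The back-of-envelope arithmetic above can be sketched in a few lines of code. Every number here is the post's own estimate (20 minutes to report, 60 minutes of downstream work, $60/hour, 6,000 hospitals averaging 5,000 reports), not measured data:

```python
# Back-of-envelope cost model for incident reporting (IR),
# using the figures quoted in the post.

REPORT_MIN = 20    # minutes to file one IR
REVIEW_MIN = 60    # minutes of reading/analysis per IR
HOURLY_COST = 60   # blended salary + benefits, $/hour

def ir_cost(reports_per_year):
    """Return (hours, dollars) spent on IRs per year."""
    hours = reports_per_year * (REPORT_MIN + REVIEW_MIN) / 60
    return hours, hours * HOURLY_COST

# One hospital (UCSF): 20,000 reports/year
hours, dollars = ir_cost(20_000)
print(f"UCSF: {hours:,.0f} hours, ${dollars / 1e6:.1f}M")

# National extrapolation: 6,000 hospitals x 5,000 reports/year
hours, dollars = ir_cost(6_000 * 5_000)
print(f"US:   {hours / 1e6:.0f}M hours, ${dollars / 1e9:.1f}B")
```

Running it reproduces the figures in the text: roughly 26,667 hours and $1.6 million for one hospital, and 40 million hours and $2.4 billion nationally.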

Even that expenditure (which is 50 times more than AHRQ spends on patient safety research yearly) wouldn’t be so horrible if this work were yielding useful insights, but, for the most part, it’s not. My colleague Kaveh Shojania recently wrote a terrific piece entitled “The Frustrating Case of Incident-Reporting Systems,” in which he argued that, while all events should be reported:

Many incidents, even if important (e.g., common adverse drug events, patient falls, decubiti) do not warrant investigation as isolated incidents. In such cases, the IR system should simply capture the incident and the extent of injury to the patient, not barrage users with a series of root cause analysis-style questions about the factors contributing to these events.

This is a great idea but I’d go one step further, to a system I’ll call, “If It’s February, It Must Be Falls.” Here’s how it would work:

I’d limit complete, year-round IR reporting to only those errors that cause temporary (33% of all IRs in one large study) or serious (1.5%) harm, along with a small number of reporting categories, such as the disruptive provider, that require complete data. For the remainder of the categories, I’d switch to a monthly schedule: all medication errors get reported in January, all falls in February, all serious decubitus ulcers in March, and so on…
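
The proposed scheme amounts to a simple filter: harm events and a few special categories are reported year-round, while everything else rotates through a monthly schedule. A minimal sketch, in which the category names and harm labels are illustrative assumptions (only the January/February/March assignments come from the text):

```python
# Sketch of the "If It's February, It Must Be Falls" reporting filter.
# Category names are hypothetical; month assignments follow the post.

ALWAYS_REPORT = {"temporary_harm", "serious_harm", "disruptive_provider"}

MONTHLY_SCHEDULE = {   # one month of complete data per rotating category
    1: "medication_error",   # January
    2: "fall",               # February
    3: "decubitus_ulcer",    # March
    # ...remaining categories fill out the rest of the year
}

def should_report(category, month):
    """Report harm events year-round; others only in their assigned month."""
    return category in ALWAYS_REPORT or MONTHLY_SCHEDULE.get(month) == category

print(should_report("fall", 2))          # True: February is falls month
print(should_report("fall", 7))          # False: rotated out in July
print(should_report("serious_harm", 7))  # True: always reported
```

The point of the sketch is that the filter is trivial to implement; the hard part, as the post argues, is the cultural shift away from "report everything."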

I’d estimate that this change would cut the number, and cost, of IRs by at least 50%, while having virtually no detrimental impact on the value derived from the systems. Risk managers would still hear about the worst errors, sentinel events would come to light to generate root cause analyses, and a month of complete data for each of the error categories would easily provide sufficient information to explicate more subtle problems. More importantly, caregivers, freed from the “report everything” mantra, would be more enthusiastic about reporting, and hospital leaders and administrators would have the time to analyze the reports and develop meaningful action plans (as well as to focus on other methods of error detection such as Executive Walk Rounds and trigger tools). As Kaveh wrote:

…organizations must recognize that the generation of periodic reports from IR systems does not constitute an end in itself. IR systems must stimulate improvement. Achieving this crucial goal requires collection of data in such a way that important signals are not lost amidst the noise of more mundane occurrences and so that hospital administrators do not experience information overload. If submitting incident reports produces no apparent response from hospital administrators, front-line personnel will predictably lose interest in doing so. In addition to undermining effort to monitor for safety problems, lack of meaningful change will negatively impact the culture of the organization in general.

I couldn’t agree more. Our unquestioning support for “report everything” incident reporting systems has created a bureaucratic, data-churning, enthusiasm-sucking, money-eating monster. It is past time we slayed it. Is anybody with me on this?


  1. menoalittle September 21, 2009 at 2:01 am - Reply


    You tackled, this Sunday, a complex subject. A flaw in the logic is that aviation’s complexity does not come close to matching the complexity of the subsurface activity and moving parts of medical care.

    Doctors do not report for three reasons: they are smart enough to realize they are wasting their time because at least a few hospital administrators are inept at understanding the root cause (read HCR’s Roy Poses, MD) and doing anything about it if they did; they fear and come to expect retaliation for frequently pointing out that the emperor is not wearing clothes; or they speak to the administrators privately.

    Or is it that they know administrator responses and safety committee meeting agendas, by design, attempt to rationalize the adverse event in order to avoid reporting it to the state patient safety authorities, rather than rectifying the human resource problems (education and supervision) and budget limitations that are often the root cause?

    The nurses at UC Irvine needed their association and the DOH to solve infusion pump overdosing.


    Slay the current IR beast. Replace it with an omnipresent link on the HIT terminals, enabling user friendly reporting and categorization. Feedback to the reporter on resolution ought to be mandatory.

    A new approach to reducing adverse events is urgently needed. With HIT taking over the landscape of medical care, the number of unremediated problems, or incidents caused by such HIT will skyrocket as they have in the avant garde hospitals. Whether it be during implementation or in the chronic HIT state, paradoxically, doctors will be under increasing and enormous pressure to create work-arounds to keep their patients safe.

    It comes as no surprise that HIT and EMR related malpractice claims are on the rise.

    The ONC Chief Blumenthal is concerned enough to tell an AHRQ conference that nobody knows how to correctly implement HIT. More research is needed (really?).

    This reveals that just a few small problems loom on the horizon.

    Should he not have asked, has there been enough safety and efficacy research done on these devices to justify the government mandated and sudden and radical change in the care delivery foundation in the first place?

    Best regards,


  2. John September 21, 2009 at 2:09 am - Reply

    Great post — I am glad someone is saying out loud what others have been thinking for years.

    I’ve been a critical care nurse for 11+ years and if I reported every near-miss I estimate I would spend one hour on each twelve hour shift reporting “incidents.” Consider the following which took place in my unit this weekend (two shifts):
    – Dobutamine 250mg/250ml premixed bag with label “To be infused in one hour” in Pyxis machine (they all had this label)
    – Central monitor in intensive care unit not working
    – Medication ordered but not scheduled in automatic order entry system (why a computer system relies on nursing staff to create a schedule for certain medications (e.g. weekly Epogen, Zaroxylyn to be given before Lasix, every-other-day digoxin) is beyond me, as almost no staff RNs know how to do this — the informatics person never answered my email about this concern)
    – Instrument count sheet from operation not in chart
    – First year intern ordering PT/INR and not PTT for patient on heparin infusion
    – Environmental reporting that no alcohol-based soap refills available until Monday
    I am sure there are more but I can’t recall them at this time. What is ironic is our hospital is having a state inspection on Monday (announced but that is another story).

    I think the other point that needs to be mentioned is that many staff (at least nurses) use incident reports as a way of (hopefully) getting disciplinary action taken. If they have a run-in with an intern or the staffing is poor you will hear someone say “That’s an incident report.” I always cringe when I hear this but it is a firmly entrenched mindset that has proven difficult to erase.

    It seems many find it hard to see system issues and get beyond the question of personal responsibility. One of our QA staff is convinced that the answer to incomplete restraint forms is disciplinary action, even though the forms are far from intuitive and most people do not fully understand them now that they are computerized. It is really a system issue, as I have tried without success to get people to see.

    It seems we cannot get above the level of blaming individual staff when something goes wrong. About a year ago my family doctor told me that one of her patients had been admitted to the hospital and inadvertently received two units of blood. At first it looked like the RN staff had made a mistake by giving this blood to my family doctor’s patient instead of her roommate. When I mentioned this to my coworkers the response was nearly universal: these nurses should be fired. I played devil’s advocate, arguing it could have been any number of things, and not simply carelessness, that caused the error. In fact this was the case, as was learned later when the incident was examined in greater detail: the intern who ordered the blood did order it for the wrong patient (so the nurses were not at fault). There was a language barrier, and the intern did not fully understand the attending physician’s orders, yet did not seek to clarify them.

    The only thing I fear about errors is that the fixes some QA people come up with are just as bad and do nothing to really address the problem (trust me, nurses are quite ingenious at finding workarounds). I can only hope that a lot of the research in human factors can be translated into better and safer environments in hospitals and other health care settings. Too often we are still thinking in the past, and that simply has not worked.

  3. Judy September 21, 2009 at 3:27 pm - Reply

    Bob, in light of this post, would your position/concerns be the same with regard to the Patient Safety Authority’s (PSA) reporting system, which requires hospitals in Pennsylvania to report many of their incidents into PSA’s reporting system? This external reporting is in addition to each hospital’s reporting of incidents into their internal incident reporting systems. This data is then used to publish the PSA’s monthly Patient Safety Advisory, in which the “emphasis is on problems reported to the Pennsylvania Patient Safety Authority, especially those associated with a high combination of frequency, severity, and possibility of solution; novel problems and solutions; and those in which urgent communication of information could have a significant impact on patient outcomes.”

  4. Bob Wachter September 21, 2009 at 6:45 pm - Reply

    Yes, Judy — I feel precisely the same way about the Pennsylvania system (which, at last count, has collected nearly 1 million reports). This is not to malign the good people in the PA Patient Safety Authority, who are doing all they can to make use of these mountains of data — and who have made some useful observations (for example, this report on retained foreign bodies in surgery). But it is to say that, if “report everything” is a bad idea at an individual hospital level (which it is), it is a terrible idea at the state or federal level.

    Most states have not followed Pennsylvania’s lead, instead focusing on mandatory reporting of only those adverse events that appear on the National Quality Forum’s list of “never events.”  The states that require reports of these events (27 states at last count) find that they receive a few hundred reports per year, a far more manageable number than Pennsylvania’s hundreds-of-thousands.

    I think these types of reporting systems do some good. But I’m not convinced that the state analysis of these data is nearly as important as the internal change that reporting generates, as hospitals are moved to improve their analytical capabilities and to take action because of the threat of state fines or site visits after they report these cases to their state.

  5. Jan Krouwer September 22, 2009 at 3:12 pm - Reply

    There are two ways to reduce the rate of medical errors.

    1. Lower the probability of errors that have not yet occurred.
    2. Lower the rate of errors that have occurred.

    The Aviation Safety Reporting System is an example of tackling #2.

    Many (most) errors do not directly cause harm – they have the potential to cause harm. This can be understood by mapping out each medical procedure. Suggesting that we not report all errors will shortchange the system.

    The most important errors receive focus by using a Pareto chart or table. The fastest way to reduce error rates relies on a suitable ranking system.

    All of this takes time, training, and commitment.

    Some successful medical examples: the anesthesiology improvements of the ’70s and ’80s (1), and the recent reduction of infections in placing central lines (2).


    1. Cooper JB, Newbower RS, Long CD, McPeek B. Preventable anesthesia mishaps: a study of human factors. Anesthesiology 1978;49:399-406.

    2. Pronovost P, et al. An intervention to decrease catheter-related bloodstream infections in the ICU. N Engl J Med 2006;355:2725-32.

  6. Ronald Hirsch, MD September 22, 2009 at 8:43 pm - Reply

    So, how do you define the end of surgery to determine if a foreign object was left behind? The JC says closure of the skin incision; surgeons say it’s when the patient leaves the OR. The lack of definitions for many complications and never events complicates reporting.

  7. Bob H. September 24, 2009 at 6:07 pm - Reply

    You were brilliant comparing your hospital data to Feather River Hospital…

    Don’t do away with IRs unless you can implement a simple outcome report to compare hospitals. If I want to pick an airline, I watch for accident reports. Without public reporting, hospitals can internally rationalize and explain events and quietly fix them themselves; what we need is outcome reporting for consumers. I’d be happy to start if all hospitals were required to publicly report MRSA or hospital-acquired infections per 1,000,000 patient admissions.

    Just report something simple without justifying why some hospital is either big, or teaching, or complex, or only doing harder cases.

    How can we drive transparency? Remember Feather River! Show the scorecards…

  8. Mark Keroack September 25, 2009 at 5:42 pm - Reply

    Dear Bob,

    I was disappointed to read your recent blog about reporting systems.  Your experiences at UCSF are at variance with much of what we have seen in 7 years of running a multicenter system at University HealthSystem Consortium (the UHC Patient Safety Net).  I’ll admit that your comments are consistent with the recent study by AHRQ and the Joint Commission that showed that most places using these systems do not derive great value from them (Farley et al. Quality & Safety in Health Care 2008; 17: 416).  I also know that there are many besides you who regard these systems with suspicion.  However, I think our experience at UHC is different for a number of reasons.

    First, the management and commentary on events is largely decentralized in the 70 institutions using our tool.  Rather than a mountain of data arriving at a central office, unit managers review and interpret about 2-3 reports a week, a very manageable number.  We surveyed 1500 nurse managers a few years ago and got over 500 responses.  The great majority felt that the reports they were reviewing were reflective of real events on their units and that they were learning new things about safety as a result.  Well over 100 had made specific changes in work flows in response to event reports (Poniatowski et al. Nsg. Admin Q 2005; 29: 72-77).  In one of our members, nurses are proudly wearing baseball glove lapel pins for “good catches” related to event reports that led to system improvements.  I believe that this decentralized approach has helped increase the mindfulness that is so important in safer systems.

    Second, we have embraced some unusual approaches to data analysis in various data mining studies we have done.  I agree that rate information is unreliable in these systems, and moreover, a given event may be described in a number of ways, since we are dealing with untrained observers.  We have focused instead on identifying clusters of events that indicate potential problem areas, without worrying about the true incidence rate.  Identification of clusters has led to scores of improvement initiatives, including medication error reduction initiatives on specialty wards, identification of equipment problems and modification of communications protocols.  Data has also been used to motivate change using what we call a “tip of the iceberg” analysis.  This is an approach in which we link a limited number of harmful events to similar near misses in order to look at common factors and make the case that there is a system problem. We have also utilized the brief and intensive surveillance approach you allude to in your blog.  By focusing for a short period on a single event type in a few units, one can approximate the true incidence rate and learn from those with low rates.  

    Lastly, we have enforced a common taxonomy from the beginning of our initiative, unlike other systems in which the organization simply installs a piece of software without connection to other organizations.  We have created a community of learners around the tool, and we hold 8-12 web conferences a month, usually with 30-50 attendees, focused on specific roles and responsibilities:  risk managers, pharmacists, data analysts, etc.  On these calls, members share their current successes, ideas and challenges.  This is supplemented by annual meetings and listservs as well as a tremendous staff, who conduct 4 studies a year based on custom requests.  These studies are posted on our web site and are available to anyone with a UHC password, whether or not they use the system.

    I think you can understand why I feel differently than you do about these systems, but I realize you’ll find these arguments largely anecdotal.  The value of a skeptic, I think, is that he holds a proponent to a higher standard of evidence.  When we saw the original Farley study, we contacted AHRQ and RAND, and we have secured funding to repeat the survey under Farley’s leadership among our users.  I think that the robust social networking and analytic expertise that support our tool will lead to a different result than the first study.  If I am wrong, we will be back to the drawing board.  If I am right, I hope the results will persuade you to reconsider your stance, or at least to keep your sword sheathed for a little while longer.

    Best regards, Mark

  9. Bob Wachter September 29, 2009 at 4:41 am - Reply

    Mark — Thanks so much for sharing your experience with the excellent University HealthSystem Consortium (UHC) system (FYI, Mark is UHC’s VP and Chief Medical Officer). To me, the key issue is that all of this reporting and analysis is new for us, and that it is vital that we engage in a thoughtful and iterative approach to making the best use of these systems. I’m pleased that you’re doing that – your learnings will be vital in helping to catalyze progress.

    The last thing I would argue is that IR reporting should be scrapped completely – obviously, we need mechanisms to hear about safety and quality defects from front line personnel, to encourage local problem-solving, to connect front-line concerns to central leaders who control resources and systems, to mine the data effectively, and to address the key issues that arise from these reports. My point is that many IR systems, as they are presently constructed and supported, do this poorly – and that the problems threaten to squander our most precious resource: the energy and enthusiasm of our caregivers and managers.

    I think we can do better, and it is clear to me that, for most institutions (including my own), doing the same thing isn’t the path to enlightenment. My goal in writing this post was to provoke everybody to consider some new approaches.

    Thanks for writing.

  10. Seema Syed October 5, 2009 at 7:42 pm - Reply

    Hello Bob! I’m a native NYer currently working in Jeddah as the Clinical Pharmacy Manager and Chair of Med Safety in the Pharmacy Dept. Yes, I am a PharmD with several pt safety/quality certs. To say I was amazed and enlightened by your article would be a gross understatement! For almost every sentence, I kept thinking that this is exactly what is happening at our 300+ bed hospital as well! I really appreciate your comments and wholeheartedly agree with you. We are getting into data overload and might be overlooking data management, something that we wanted to do from day ONE!!!
    Once again, THANK YOU and I would be most interested in sharing information with you all and learning from you all as well!

    Good Luck!!!

  11. Jeff Brown October 19, 2009 at 5:26 pm - Reply

    I work for a human factors research organization and have been increasingly engaged in healthcare safety improvement/risk reduction initiatives over the past decade. Concern with the quality and effect of incident reporting, investigation, and analysis has been a common denominator across these initiatives.

    I concur with the assessment of IR provided by Dr. Wachter and others, and would underscore that root cause analyses are not typically informed by investigative methods that unpack the circumstances surrounding an incident in a way that allows meaningful analysis and action. Education or training, for example, is too often implicated as both the culprit and the cure for incidents/mishaps, while conditions that render practitioners and patients vulnerable to mishap and injury go untouched. Investigators may readily identify how an individual implicated in an incident deviated from expected practice (“per policy” or ‘the way I would have done it’), but bring no attention or insight to the very real constraints in the clinical context that compel that deviation. Sending the clinician implicated in the incident to ‘re-education camp’ will do nothing to resolve the conditions of practice that clinicians must adapt to or work around as they care for patients–and which create potential for failure that is rarely identified in advance of a sentinel event, if then.

    We need to slow down and conduct the pragmatic research necessary to learn how to effectively, not just efficiently:

    — implement surveillance strategies and agents
    — enhance problem/anomaly detection, reporting, investigation, and analysis
    — establish and sustain cycles of ‘correction’, effect monitoring, and adjustment

    While the problems with healthcare incident reporting are numerous, we would be remiss if we did not attempt to garner the lessons learned from the large-scale natural experiment that has taken place in healthcare with incident reporting. There are bound to be practices that we will want to understand better and propagate.
    Attaching incident reporting, as an additive bureaucratic endeavor, to healthcare organizations has been expensive and frustrating, but we can and should learn from what has transpired over the past ten years.

    Thanks for raising this issue,

    Jeff Brown

  12. Seema Syed October 26, 2009 at 9:14 pm - Reply

    Jeff, thanks for the great comments. I would also like to add that many of these so-called deviations from standard practice are due to staffing, workplace issues, etc., but we never correct this real root cause. I wonder when we are going to make the workplace environment conducive to safety?

    Thanks again,


  13. Tobey Llop April 24, 2012 at 9:30 pm - Reply

    I’m trying to find by gosh or by google how to report a major system-wide failure adversely affecting patient care (which the nurses shrug off) at a major hospital that gets huge amounts of taxpayer dollars. What is keeping avenues for complaint from being front and center on internet search engines? I’m looking for the Joint Commission for Accreditation and the NYS DOH for where to report and I find myself here. Patients are suffering and taxpayers are getting milked!

  14. Dr Anan December 4, 2012 at 8:58 pm - Reply

    As we are human, we make mistakes. Incident reporting is the cornerstone of patient safety, but if we go deep into this issue we will find many barriers that make doctors and nurses reluctant to report. One of these, I think, is a lack of awareness of patient safety in hospitals, while many fear the consequences of reporting (loss of respect, blame culture, etc.).

  15. Robert Hunn October 28, 2013 at 11:22 pm - Reply

    Dr. Wachter, it is great to see you fully address the elephant in the bathroom. I know UCSF has done a phenomenal job of capturing information related to patient and staff incidents.

    But many medical care organizations need to assess and spend more on prevention. I sincerely believe many healthcare administrators, not speaking of UCSF, fail to see the benefit of investing in accident or incident prevention. More research is needed, but so is more understanding of the need to reduce preventable incidents through investment in better design.


About the Author: Bob Wachter

Robert M. Wachter, MD is Professor and Interim Chairman of the Department of Medicine at the University of California, San Francisco, where he holds the Lynne and Marc Benioff Endowed Chair in Hospital Medicine. He is also Chief of the Division of Hospital Medicine. He has published 250 articles and 6 books in the fields of quality, safety, and health policy. He coined the term “hospitalist” in a 1996 New England Journal of Medicine article and is past-president of the Society of Hospital Medicine. He is generally considered the academic leader of the hospitalist movement, the fastest growing specialty in the history of modern medicine. He is also a national leader in the fields of patient safety and healthcare quality. He is editor of AHRQ WebM&M, a case-based patient safety journal on the Web, and AHRQ Patient Safety Network, the leading federal patient safety portal. Together, the sites receive nearly one million unique visits each year. He received one of the 2004 John M. Eisenberg Awards, the nation’s top honor in patient safety and quality. He has been selected as one of the 50 most influential physician-executives in the U.S. by Modern Healthcare magazine for the past eight years, the only academic physician to achieve this distinction; in 2015 he was #1 on the list. He is a former chair of the American Board of Internal Medicine, and has served on the healthcare advisory boards of several companies, including Google. His 2015 book, The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age, was a New York Times science bestseller.


By:  Richard Bottner, PA-C Hospitalist, Division of Hospital Medicine, Dell Seton Medical Center Assistant Clinical Professor, Internal Medicine, Dell Medical School at The University of Texas at Austin Alvin is a 42-year-old man who was never really given a chance. His parents both had severe alcohol use disorder. At age 12, his parents encouraged him to skip school […]