Category: Blog

For Their Own Good: Involuntary Commitment and Patients’ Rights in Washington, D.C.

As we approach the holidays, we are encouraged to consider what we are grateful for. Walking around the WCL campus and seeing the posterboards, I am struck by how many people express open gratitude for therapy and psychiatric medication. Over the last week, I’ve seen notes that say: “I’m grateful for my meds” and “I am grateful for Celexa” (an antidepressant). In past years, I’ve seen: “I’m grateful for Prozac” and “I’m grateful for Lexapro,” also antidepressants. These notes indicate to me that there is not only less shame in disclosing mental health conditions and treatments, but also more desire to speak openly about how we live with these conditions. Given the stressful, highly competitive nature of law school, students on psychiatric medications should be aware that their conditions can become unmanageable and require treatment in a psychiatric hospital ward. This raises a question: Are they aware of their rights once they have voluntarily accepted, or have been forced into, treatment? It is important to understand the basic rights of involuntarily committed patients, because these patients are often vulnerable to misuse of their information and arrive at the hospital in a state not conducive to self-advocacy. Even in the midst of a mental health crisis, patients have a right to privacy and humane treatment.

What follows is a summary of the rights of mental health patients in the D.C. metropolitan area. A person can be involuntarily committed for psychiatric treatment in Washington, D.C. when a psychiatrist, qualified physician, or qualified psychologist has examined the person and determined that the person has a mental illness that makes them a danger to themselves or others, and that hospitalization is the least restrictive setting that meets the person’s needs. If the patient is a minor (under 18), their parent or legal guardian must be served notice of the patient’s admission to the hospital no more than 24 hours after admission. If the patient is an adult, the patient must authorize the Department to serve notice to their spouse, domestic partner, or legal guardian, and the notice must be consistent with the District of Columbia Mental Health Information Act of 1978, D.C. Code § 7-1201.01. Patients cannot be detained for more than 48 hours unless the chief officer of the department files a written petition with the court for emergency observation and diagnosis of the patient, in which case the patient’s detention cannot exceed seven days from the time the order is entered. Patients are also entitled to a complete record of their treatment.

Not all mental health conditions qualify for involuntary commitment. A national survey of psychiatrists in the United States revealed that most believe that simply having a mental illness diagnosable under the DSM should not be grounds for involuntary commitment, but that being a danger to oneself or others should be. Most psychiatrists surveyed believed conditions with a psychotic component should be grounds for involuntary commitment, and fifty-one percent believed that having depression should be.

We fear what we do not understand. Mental health commitment is not something to be feared. Psychiatric wards offer a place to rest and disconnect from the stresses of daily life until we are ready to face them again. The goal of involuntary commitment is to prepare patients for a successful re-entry into society. Commitment may seem frightening at first, but it will likely be an experience you are grateful you had.

ChatGPT, MD? Artificial Intelligence in Healthcare 

ChatGPT is a natural language processing tool created by OpenAI, a research company specializing in artificial intelligence tools such as DALL-E 2, an AI art generator. Since its launch in November 2022, ChatGPT has become one of the fastest-growing apps in recent memory. By January 2023, the chatbot had reached 100 million active users, with an average of 13 million visitors per day. Available for both desktop and mobile devices, ChatGPT employs a dialogue format similar to messaging apps, in which users type prompts and ask questions on a variety of topics. Numerous articles offer users tips on the best prompts to give ChatGPT, from drafting cover letters to solving complex math problems and editing videos. Given its ability to form human-like responses, some users have turned to ChatGPT for medical advice. Patients can ask general questions about health conditions and use AI tools for summaries or resources to prepare for medical visits. However, the popularity of ChatGPT and its accelerated development have led industries to question how artificial intelligence may affect patient care in the near future, including concerns about privacy and clinical care. As a result, it is worth asking how AI tools such as ChatGPT might be used to improve the quality of healthcare, and what risks are involved.
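
For readers curious what that dialogue format looks like behind the scenes, the sketch below shows a single prompt-and-response exchange using OpenAI’s Python client. It is illustrative only: the model name and the example question are placeholder assumptions, not an endorsement of using a chatbot for medical advice.

```python
# A minimal sketch of one prompt-and-response exchange with ChatGPT via
# OpenAI's Python client. The model name and the question are illustrative
# placeholders; this is not medical advice or a recommended workflow.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model would work here
    messages=[
        {"role": "user",
         "content": "In plain language, what questions should I ask my "
                    "doctor about starting a new antidepressant?"},
    ],
)

# The reply arrives as ordinary text, just as it would in the chat interface.
print(response.choices[0].message.content)
```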

Prior to ChatGPT, people frequently turned to the Internet to self-diagnose. Studies have shown that between 67.5% and 81.5% of American adults have searched for health information online. While this is not a new phenomenon, the conversational nature of ChatGPT and the lack of regulation around artificial intelligence raise new ethical and moral questions about using AI tools this way. Generally, health experts have recommended against using ChatGPT for medical advice. However, doctors have reported that AI tools may be useful for patients who want to learn more about certain conditions such as COVID-19, including their symptoms, causes, risk factors, and treatments, as well as the side effects of prescription drugs. Early studies have also indicated that AI models including ChatGPT could be used to screen patients for mental health conditions such as depression or anxiety and to help determine treatment options.

While ChatGPT has the potential to change the medical field in terms of the diagnosis and treatment of various health conditions, it also raises new liability risks for health care providers. Physicians who rely on AI tools may be subject to greater liability for medical malpractice, as courts would likely find it unreasonable for health professionals to rely on AI-generated content. In addition, companies have noted that there is currently no way for physicians to use ChatGPT with patients’ protected health information (PHI) while remaining compliant with HIPAA. For instance, a physician who used ChatGPT to transcribe handwritten notes or recordings from a routine appointment could violate HIPAA regulations without knowing it.

Although the use of artificial intelligence in healthcare settings is fairly limited today, researchers have begun considering how AI systems can be built to improve the efficiency and effectiveness of medical services. This includes proposals for “human-centered AI” that takes a problem-solving approach to key issues in clinical care. One possible method is to train AI models on large amounts of data to look for certain health conditions in specific populations. Recently, Stanford School of Medicine and the Stanford Institute for Human-Centered Artificial Intelligence (HAI) announced the creation of RAISE-Health (Responsible AI for Safe and Equitable Health), an initiative to integrate AI into healthcare. This includes developing a platform for the responsible use of AI systems, as well as enhancing clinical services and educating patients and providers on best practices.

As ChatGPT becomes an increasingly prominent part of medicine and other industries, it is important to consider how AI tools can streamline the work of healthcare systems while also cautioning physicians about the legal and ethical risks involved in their use.

Hospital Mergers: Anticompetitive Consolidations in the Landscape of American Healthcare

Between 1970 and 2019, the share of local hospitals consolidated into larger healthcare systems increased from 10% to 67%. The healthcare industry is slowly monopolizing. Some believe that this merger-centric healthcare economy is good for patients, especially those in rural communities, where mergers are especially prevalent. One characteristic of rural acquisitions stands out: the healthcare organizations involved in these mergers often come from out of market. A recent study found that 17% of rural hospitals merged with healthcare organizations outside their local geographic market. These out-of-market mergers increase large healthcare corporations’ market share, reducing potential competition and further monopolizing the landscape. Supporters of both in-market and out-of-market mergers argue that they play a vital role in preserving rural communities’ access to care. Small and financially vulnerable hospitals in rural areas may seek out a merger with a large healthcare organization to improve their financial outcomes or to increase the quality of their services to patients.

While supporters insist that mergers are a logical way to give patients better access to specialty care, skeptics worry about the downsides. Chief among them, these mergers often shift the costs of care onto patients. One study examining 366 hospital mergers and acquisitions between 2007 and 2011 found that prices for patients increased by over 6%. Adding insult to injury, the same study found that prices at hospitals with a monopoly over their market were on average 12% higher than those in markets with four or more competing care centers.

How should we address the financial burden patients endure from hospital mergers? Well, this is where antitrust oversight and enforcement come in. Antitrust laws exist to protect consumers by preventing monopolies and encouraging competition to drive efficiency, improve quality, and lower prices for the everyday American. Three cornerstone federal antitrust laws collectively govern the regulation of competition: the Sherman Antitrust Act of 1890, the Clayton Antitrust Act of 1914, and the Federal Trade Commission Act of 1914. Additionally, most states have parallel statutes that provide antitrust enforcement procedures within their jurisdictions. To address anticompetitive activity in transactions such as out-of-market mergers, Congress passed the Hart-Scott-Rodino Antitrust Improvements Act, which mandates that notice be sent to the Federal Trade Commission when a transaction exceeds $101 million. This threshold is concerning through the lens of hospital mergers, where acquisitions rarely exceed the reporting threshold. Today, eleven states lack processes for tracking and/or challenging healthcare provider transactions beyond what the federal antitrust laws provide. In contrast, only four states require notice of all transactions between healthcare entities.

State legislatures have a few options to remedy this issue, though one is particularly compelling: state policymakers can enact statutes that prohibit anticompetitive clauses in healthcare merger and acquisition contracts, preventing providers from using their enormous market power to strongarm patients into paying more. By taking this step, legislators can protect their states from mass consolidation of their healthcare markets and protect their constituents’ access to affordable healthcare.

Group Health Plans and the HIPAA Privacy Rule

Under the administrative simplification provisions of the Health Insurance Portability and Accountability Act (HIPAA), employer-sponsored group health plans face significant compliance obligations. In accordance with 45 CFR § 160.103, the HIPAA Privacy Rule applies only to the entities listed and deemed covered entities. For purposes of the Privacy Rule, a covered entity is a health plan, a health care clearinghouse, or a health care provider who transmits health information in electronic form in connection with a covered transaction. Within the list of covered entities, a health plan is defined as an individual or group plan that provides, or pays the cost of, medical care. Further, a group health plan means an employee welfare benefit plan, including both fully insured and self-insured plans, to the extent that the plan provides medical care. This includes items and services paid for as medical care to employees or their dependents directly or through insurance, reimbursement, or otherwise. The Privacy Rule applies to a group health plan that has 50 or more participants or that is administered by an entity other than the employer that established and maintains the plan. Thus, the group health plan is treated as an entity entirely separate from the employer and plan sponsor that provide the requisite care or insurance to employees.

While group health plans are clearly significant in the context of the HIPAA Privacy Rule, it is helpful to distinguish between the types of plans employers commonly use. A fully insured health plan is considered the traditional method of insuring employees. Within this framework, the employer pays a fixed premium to a large insurance provider (such as Aetna, Kaiser Permanente, or UnitedHealthcare), and the provider covers the cost of the employees’ medical expenses. While fully insured plans are typically more expensive for the employer, the premium rates are fixed annually based on the number of enrolled employees. The insurance provider handles claims in accordance with the plan the employer selects, and employees are responsible for deductibles or copays depending on the types of medical services the plan covers. By contrast, self-insured health plans are more flexible than fully insured plans because they enable the employer to design a plan that best meets the individualized medical needs of its employees. More specifically, this approach cuts out the preset framework from insurance providers, and the employer is responsible for determining the costs of its own plan. Commonly, employers will use a form of stop-loss insurance to mitigate the cost if an employee’s claims run well beyond the expected coverage.

While there are benefits and downsides to both types of health plans, the HIPAA Privacy Rule can apply to either in a variety of circumstances. First, if the company health plan is administered by a third party, it does not matter whether the plan is fully insured or self-insured: the plan will be subject to the Privacy Rule. However, if the health plan is truly self-administered (no third parties), has under 50 total participants, and does not handle protected health information (PHI), then the Privacy Rule will not apply. Consequently, it is extremely uncommon for a fully insured plan to be subject to the standards of the Privacy Rule, because the relationship between the employer and the insurance provider is structured to keep PHI from being transferred to the employer. Additionally, under ERISA, merely holding the title of “plan administrator” without carrying out the plan’s daily functions is not enough to make a plan self-administered.

Navigating HIPAA and its complex application to group health plans can be challenging for many businesses and organizations. It is therefore essential for employers to select the type of plan that best suits their employees’ needs while remaining cognizant of the constraints that both ERISA and the HIPAA Privacy Rule attach to group health plans.

FDA Approves Tandem’s Automated Insulin Delivery Technology For Toddlers: Another Step Toward Hybrid Closed Loop Systems in Type 1 Diabetes Management

On November 7, the FDA announced clearance of Tandem’s Control-IQ technology for use by children ages two and older. Control-IQ is software that feeds glucose readings from Dexcom’s continuous glucose monitor into Tandem’s insulin pumps and, in conjunction with the user’s insulin-to-carb ratios and daily trends, automatically increases, decreases, and suspends insulin delivery. Previously, the FDA had cleared Control-IQ only for children ages six and older.

This advancement represents the latest step toward greater access to hybrid closed loop systems for type 1 diabetes care. Hybrid closed loop systems are those like Control-IQ, in which a device monitors the user’s blood glucose levels and communicates with an insulin delivery device to adjust insulin dosing autonomously.
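
To make the idea concrete, the sketch below shows, in highly simplified form, the kind of feedback loop such a system runs: read a glucose value, compare it to a target range, and nudge insulin delivery up, down, or off. The thresholds, adjustment factor, and function name are illustrative assumptions, not Tandem’s actual algorithm or clinical guidance.

```python
# Simplified, illustrative feedback loop of a hybrid closed loop system.
# All numbers and names are placeholders, not clinical values or any
# manufacturer's real control logic.
TARGET_LOW_MG_DL = 70    # below this, suspend insulin delivery
TARGET_HIGH_MG_DL = 180  # above this, increase insulin delivery

def adjust_basal_rate(glucose_mg_dl: float, current_rate_u_per_hr: float) -> float:
    """Return an adjusted basal insulin rate based on the latest CGM reading."""
    if glucose_mg_dl < TARGET_LOW_MG_DL:
        return 0.0                          # suspend delivery when glucose is low
    if glucose_mg_dl > TARGET_HIGH_MG_DL:
        return current_rate_u_per_hr * 1.2  # deliver more when glucose is high
    return current_rate_u_per_hr            # otherwise leave delivery unchanged

# Example: a reading of 200 mg/dL would bump a 1.0 U/hr rate to 1.2 U/hr.
print(adjust_basal_rate(200, 1.0))
```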

Although Control-IQ demonstrates a strong capacity to improve blood glucose control, many diabetes patients lack knowledge of and access to the technology. In a recent Dexcom survey of people with type 1 diabetes, 45% of respondents indicated that they did not know what a hybrid closed loop system was or how it could benefit them.

As diabetes care technologies grow more capable, taking the wheel from the patient as decision-maker, many patients may wonder what would happen if their devices were to make an error and seriously harm them. As of December 2022, the FDA had received over 500,000 complaints about the Dexcom G6, Dexcom’s continuous glucose monitoring system (CGM). In late 2019, a woman brought a wrongful death action against Dexcom after her husband’s CGM failed to alert him of his hypoglycemia in time for him to act. This raises particular concerns for patients with limited capacity to take control and make decisions of their own, including young children and toddlers.

Nonetheless, the FDA’s updated clearance came only after a study released earlier this year showed that hybrid closed loop systems improved blood glucose control for children between the ages of two and five, especially overnight. It also follows the recent clearance of Tandem’s Mobi, a smaller, more discreet insulin pump that the user controls from their phone. While Mobi is no more adept at automated insulin delivery than the standard models, intelligence is just one consideration in the pursuit of seamlessly integrating diabetes care into patients’ everyday lives. Practically speaking, the aim of eliminating manual blood sugar checks exists because constant self-care is burdensome and exhausting; so is wearing equipment all hours of every day. From the perspective of the patient’s daily experience, comfort and algorithmic advancement go hand in hand.

FDA clearance is distinct from FDA approval. Approval means the agency has determined that a product or treatment’s benefits outweigh its risks, while clearance means that a product is substantially equivalent to a device already legally on the market.

That said, even if Tandem’s new products were to earn approval, FDA approval cannot entirely shield a device manufacturer from liability. FDA approval represents a floor for safety standards and is not, on its own, enough to defeat a claim against a medical device or drug manufacturer. In practical terms, the FDA’s standard gives diabetes patients and their caretakers reason for both ease and concern. On the one hand, one of the major appeals of closed loop technologies is that patients are free to spend less time manually checking their blood sugar and delivering insulin, especially overnight. On the other, ceding control to an algorithm may feel uncomfortable for some patients, especially those who have taken responsibility for their own care for a long time or are otherwise wary of rapidly advancing technology.

Virginia: An Omen of Things to Come for Anti-Abortion Candidates

On Tuesday, November 7, 2023, voters in Virginia demonstrated the importance of abortion (and thus a pregnant person’s right to choose) when Virginia Democrats retook full control of Virginia’s General Assembly. Prior to the election, Democrats held the majority in the Senate (22-18), while Republicans held the majority in the House of Delegates (52-48). All seats were on the ballot on November 7, and candidates’ stances on abortion loomed large in the legislative races: Democratic ads featured abortion more than any other issue, while Republican ads focused on the economy, education, public safety, and parental rights.

With Democrats back in full control of the General Assembly, Republicans will be unable to implement new abortion restrictions, specifically Governor Youngkin’s proposed abortion limit. Prior to the election, Governor Youngkin pledged to sign his proposed limit if Republicans maintained control of the Senate and gained control of the House. Youngkin’s plan would replace Virginia’s current abortion law with a ban after 15 weeks of pregnancy, with exceptions for rape, incest, and saving the pregnant person’s life. Virginia’s current policy is that abortion is legal until viability, defined by the Supreme Court of the United States as “the capacity for meaningful life outside the womb, albeit with artificial aid… [not just] momentary survival.” Throughout the election, Democrats promised that abortion ban legislation would not be sent to Governor Youngkin’s desk for the remainder of his term in office. They have gone as far as to state that any legislation to limit abortions is “guaranteed to fail in the General Assembly next year.”

Two major organizations, Think Big America and the ACLU of Virginia, donated substantial sums to Democratic candidates. During the week before the election, Think Big America, a nonprofit group affiliated with Illinois Governor J.B. Pritzker, donated $250,000 to Democratic candidates: $25,000 each to four Democrats running in battleground Senate districts and $150,000 to the Democratic Party of Virginia. The ACLU of Virginia donated just over $1 million toward direct mail, digital ads, and volunteer outreach highlighting candidates’ positions on abortion in five competitive Senate districts and six competitive House districts.

While the ACLU of Virginia’s financial investment in this year’s election cycle was unprecedented in its history, the organization believed it was necessary because November’s legislative races would affect abortion access and policy decisions in Virginia. The November 7 election outcome determined whether abortion and reproductive rights would remain protected for Virginians, as well as for those traveling from the South to Virginia to access abortion care.

Virginia’s election results were echoed elsewhere. On November 7, Ohio voters approved a constitutional amendment ensuring access to abortion and other forms of reproductive health care. The amendment included some of the most protective language for abortion access of any statewide ballot initiative since the Supreme Court’s decision in Dobbs.

As demonstrated, candidates’ stances on abortion carry consequences at the ballot box. Overall, about two-thirds of Americans believe abortion should be legal up to 24 weeks, a quarter believe abortion should always be legal, and about one in ten believe abortion should always be illegal. As the 2024 presidential race gets underway, Republican candidates differ on abortion, with some arguing for a national 6- or 15-week ban and others arguing that the decision should be left to the states.

Regardless of the Republican Party’s course of action on abortion bans, Senator Kevin Cramer (R-N.D.) stated, “[t]he people aren’t with us,” acknowledging that the country is not on board with extreme limits on abortion. Even as Republican candidates waver on their abortion stances, one thing is certain: abortion will remain a hot topic in the 2024 presidential and congressional races.