Professional Perspectives: Interview with a Personal Injury Trial Attorney

In Professional Perspectives, we invite professionals of all stripes to tell us about ethics as it is relevant to their professional life. Today we present an interview with Mario Palermo, a personal injury trial attorney who practices law in Chicago. Mario is passionate about ethical practice, and has also written a piece on advice he has for attorneys who wish to avoid the common ethical pitfalls of the trade, which you can find below the interview.

Tell us a bit about your profession. What do you do?

I am a personal injury trial attorney. I help injured people and their families.

Do you find ethical reflection and decision-making to be a significant part of your work?

Yes. My job is to put the interests of my injured clients above all others, including my own. I am always mindful of this mantra. I have to balance my role as my client’s representative with the rules of ethics and evidence. I have to zealously advocate for my client while adhering to the rules.

The hardest challenge is when the best interests of my client are not consistent with my client’s desires. This comes up when deciding on how or whether to use a piece of evidence or in settlement negotiations.

What types of ethical situations do you encounter in your work?

I have to be mindful of conflicts of interest. For example, if a family comes to me for help after a car crash, it may not be appropriate to represent both the driver and the passengers if the driver may be found to be partially responsible for causing the crash. I have to make decisions at trial regarding how to use evidence or how to frame an argument. If you go too far, it can result in a mistrial.

Can you describe a case that was, ethically, particularly interesting or difficult for you?

I took on a case where a mother and her children were seriously injured in a car wreck. I liked them all. They are a beautiful family. I agreed to represent them all because it appeared that the defendant, a young, inexperienced driver, was the sole cause of the violent wreck. The defendant turned in front of the family at an intersection where the family had the right of way. The defendant admitted to the police at the scene that she did not know what color the traffic light was before the collision. Meanwhile, the mother was 100% sure she had a green light.

Unfortunately, the defendant changed her story when I questioned her under oath. Her new story was that she was sure the mother ran a red light and that she had a green arrow. I did not believe her. Despite the obvious issues with the defendant’s credibility, I was placed in a position where I had to decide whether to represent the children OR the mother. Why? Because the children, as passengers, could not be to blame and it could diminish the children’s recovery if the jury somehow believed the defendant’s new story.

This was difficult because I did not want to disappoint or divide the family who trusted me and wanted me to represent all of them. I decided it was best to continue to represent the children and withdraw as the mother’s attorney. What’s worse is that not only could I no longer ethically represent the mother, but I was duty-bound to name the mother as a defendant as well. I did not want to do this. I risked losing the whole family as clients and was giving up significant legal fees because the mother was seriously injured. It was not pleasant explaining to the family what I had to do but it was the right thing to do. In the end, it worked out great for all concerned.

What are some significant factors in your ethical reflections and decisions in professional life, and why?

Ethical codes are the starting point and compass. They must be adhered to. They often provide guidance, even when a situation is not directly governed by them. However, the codes cannot cover every situation. When there is truly a grey area, I rely on open communication and full disclosure to the client. The best course is to let the client decide after being fully and fairly informed. I work hard to earn my client’s trust. In the end, if there is a difficult decision, I advise them what I would do if I was representing my mother in a similar situation and I explain why.

How do you go about making ethical judgments and decisions in your professional life?

I start with the rules. Mantra one is that I place my client’s interests above all others, including my own. Mantra two is I explain to my client what I would do if I was advising my mother in a similar situation. If my client insists on steering into disaster, I patiently but persistently educate them. There have only been a couple of occasions in 21 years of practice where I had to withdraw from representing someone that insisted on steering into an iceberg.


A big thanks to Mario for contributing to Professional Perspectives! Would you like to contribute? We want to hear from all kinds of professionals. Send me an email at and mark it with Professional Perspectives if you want to contribute.

Below you can find Mario’s list of Ethical pitfalls for personal injury attorneys:

5 Ethical Pitfalls Attorneys Must Avoid in Personal Injury Cases

By Mario Palermo

Attorneys play a crucial role in society, as they are responsible for upholding the law and protecting individuals’ rights against abuse and crime. The profession’s collective reputation is critical to the trust it inspires among the general public. In other words, if lawyers do not adhere to and promote the ethics and principles of fairness and equality, the public’s confidence in the law will be undermined, hindering its access to justice. Consequently, as guardians of the law, attorneys are expected to practice certain professional ethics, such as placing the interests of their clients above their own and upholding respect for the courts.

An attorney handling personal injury cases often faces pitfalls that can lead to unethical conduct. In this era of suspicion toward personal injury lawyers, even the most straightforward personal injury case can trigger a variety of ethical dilemmas. Furthermore, the motives of the individuals making personal injury claims are not always clear. In other cases, owing to poor legal outcomes, lack of trust, or mounting frustration, clients resort to filing disciplinary complaints or suing the attorneys and their law firms for unprofessional conduct.

Here are five commonly encountered ethical pitfalls that attorneys must avoid in order to uphold the dignity of the judicial office and build a trustworthy relationship with their clients.

1. Lack of Communication

Poor communication with clients can cause a serious rift in a personal injury case, making it challenging for the attorney to obtain the client’s support and the information required about the incident.

A personal injury attorney is required to listen to his/her clients’ grievances, address their concerns, and ensure that they have a realistic understanding of what to expect from the case.

Rule 1.3 of the D.C. Rules of Professional Conduct states that:

  • (i) a lawyer shall represent his/her client zealously and diligently within the bounds of the law and
  • (ii) a lawyer shall act with reasonable promptness in representing his/her client

To comply with these professional ethics, a lawyer must make an effort to communicate effectively with the client, analyze the legal issue, and keep the client informed regarding changing laws and prevailing circumstances.

2. Conflict of Interest and Lack of Confidentiality

Conflict-of-interest situations often raise complicated ethical dilemmas. For instance, it is against the professional ethics of an attorney to represent both parties in the same or related personal injury litigation.

The lawyer-client relationship is based on trust and confidence. Hence, it is crucial to run conflict checks early in order to determine the general nature of the personal injury case and the names of the parties involved. Conflict checks ensure that a lawyer’s commitment to his/her client isn’t disturbed by his/her commitment to another party.

Similarly, it is unethical for a lawyer to share a client’s confidential information, even after the case has been resolved. The confidentiality rule states that ‘a lawyer shall not reveal information relating to the representation of a client unless the client gives informed consent.’ Therefore, lawyers are required to fulfill a duty of confidentiality towards their clients, encompassing all aspects of the representation.

3. Placing a Monetary Figure without Thorough Case Examination

It is extremely poor ethical conduct for a lawyer to guarantee a client a particular legal outcome of the court proceedings. Moreover, an ethical personal injury lawyer will never tell a client what his/her case is worth without sifting through the medical records, the case papers, the insurance documents, and other evidence pertaining to the case.

4. Failure to Correct False Testimony or Evidence

One of the trickiest situations faced by attorneys is when the client hides facts or gives false testimony or provides false evidence. A lawyer has a duty to prevent the court from being misled by false statements and evidence.

When faced with such a situation, an attorney should talk to his/her client and ask him/her to retract the false evidence or testimony before the court. If the client refuses to comply, the lawyer may withdraw from the case after informing the client. However, the lawyer’s duty of confidentiality remains even after withdrawing from the case.

5. Indulging in Unethical Advertising

Advertising the services of a law firm or an attorney is permitted under the Rules of Professional Conduct, provided the statements and claims in the advertisements are ‘truthful and not misleading.’ For instance, statements such as ‘We win 99 percent of personal injury claims in Chicago’ or ‘We are Chicago’s most preferred personal injury attorneys’ must be avoided unless they are absolutely true.

With the advent of new technologies and new ways of communicating, such as social media and mobile marketing, it is crucial for lawyers to be aware of the legal marketing ethics of each state. A few state bar associations, such as those of New York and Florida, have specific social media guidelines for lawyers and require them to adhere to the state’s guidelines as well as the federal online marketing rules enforced by the Federal Trade Commission (FTC).

Since practicing law carries a high level of social responsibility and requires maintaining the dignity of the legal profession, there are certain duties, codes of rules, and principles of behavior that a lawyer is expected to adhere to. Attorneys, especially those who are new to practicing law, often find themselves in situations that are at odds with these.

A personal injury attorney owes it to the legal profession, to society, and to his/her clients to uphold the honor and integrity of the profession. The above-mentioned points can help in understanding the most common ethical pitfalls in personal injury cases.


Troubleshooting Empowerment

What is empowerment?

There is no consensus definition of the concept of empowerment, but WHO’s definition appears to be an appropriate point of departure: “Empowerment is the process of increasing the capacity of individuals or groups to make choices and to transform those choices into desired actions and outcomes,” (WHO 2006: 17).

In health care, empowerment is usually viewed as a social process or strategy for achieving control over factors and decisions affecting one’s health (Gibson 1991), and of enhancing patients’ autonomy and capacity to make informed decisions (Kapp 1985).

Empowerment is a relational concept (Gibson 1991; Tveiten 2007), in which one party helps or facilitates the ‘empowering’ of the other. Further, an empowerment process entails a transfer of control and power from the nurse to the patient through dialogue, for instance through strategies such as motivation training, guidance, coaching, teaching, and shared decision-making. Hence, a central principle in empowerment is to acknowledge the patient as the expert on his own situation (Tveiten 2007).

Despite these intuitively positive associations, empowerment in practice presents some ethical questions that have yet to be thoroughly explored. For instance, little work has been done on whether the use of empowerment strategies in nurse-patient relationships is compatible with accepted standards for autonomous action. In what follows, I will sketch out some of the questions that crop up when considering empowerment within the context of nursing care.

Empowerment and autonomy

The relation between empowerment, autonomy, and consent in the nurse-patient relationship is interesting for several reasons. For instance, very often, being in need of nursing care means being in a very vulnerable position. Further, an increasing number of patients in need of nursing care have dementia or other forms of cognitive impairment, something that challenges their capacity to consent in the first place, as well as their capacity to participate in empowerment processes aimed at enhancing their consent. In such situations, the exercise of autonomy depends on the existence of caring and trusting relationships (Lôhmus 2015: 11).

To be consistent with the emphasis on acknowledging the patient as an expert on his own situation, we should respect that some people do not want to take control over their own lives. We should also respect that some patients do not want to participate in empowerment processes aimed at enhancing their consent, and we should respect the fully informed patient who still doesn’t want to make a decision, but instead wishes to be dependent on health professionals (Kapp 1985).

Empowerment and capacity to consent

According to WHO’s definition of empowerment, the outcome of a successful empowerment process is an individual who possesses increased capacity to make choices as well as increased ability to transform these choices into desirable actions and outcomes. Now, whereas the philosophical literature deals with informed consent and decisional capacity at a conceptual level, it is not clear what increased decisional capacity amounts to in practice. Does it, for instance, imply that a patient who lacks decisional capacity in a certain situation may gain such capacity through an empowerment process?

A further problem is that it is unclear whether the outcome of such a process is a feeling of being empowered or actually being empowered to make one’s own decisions (Kieffer 1983), and how this distinction relates to autonomy and valid consent. This point is important since the choice made, according to WHO’s definition, is to be transformed into a desirable action and outcome. An additional question is: desirable according to whom? Health care authorities, nurses, and the patient in question may have different opinions concerning what a desirable outcome of a situation should be. An empowerment process may, therefore, result in a choice and action that conflict with what is professionally desirable and recommendable.

A third difficulty relates to Kapp’s (1985) point that decision-making power must be accepted voluntarily. This would imply that neither empowerment nor autonomy can be forced upon someone. But then we must ask if empowerment could be considered an intervention that presupposes voluntary participation in the first place. The central problem here is the extent to which participation in an empowerment process that is intended to increase capacity to consent, presupposes autonomy and consent. If that is the case, empowerment appears to be an awkward bundle to carry:

If empowerment is compatible with an acceptable standard of autonomy, would this imply that becoming autonomous presupposes already being autonomous? If it is not the case, however, that participation in an empowerment process requires being autonomous, would this imply that a patient legitimately can be forced to participate in an empowerment process aimed at enhancing this patient’s autonomy? Or does empowerment represent a form of manipulation or coercion?

Empowerment and paternalism

This leads to my hypothesis that empowerment may have paternalistic undertones. For instance, adequate information is essential to empowerment (Kapp 1989), just as it is essential for a patient to fully understand an action and its consequences, which is one of the standard criteria of autonomous action. Besides, concern for patients’ own experiences, needs, values, and desires constitutes a hallmark of empowerment, and is also of fundamental importance in respecting a patient’s autonomy.

But it is unclear how much weight the patient’s own values and desires should have in cases where these conflict with the best up-to-date research findings, and how the different kinds of knowledge should be balanced in an empowerment process. For example, evidence-based practice has become highly influential in nursing care, and, according to evidence-based practice, health care should be based on the best up-to-date research findings (Gupta 2014).

Central questions then are: if too much emphasis is placed on the patient’s own values and desires in an empowerment process, at the expense of professional knowledge, do we run the risk of diminished professional responsibility? Or is the asymmetry of power in favour of the nurse reinforced by the considerable emphasis on such factors as the best up-to-date research findings?

In either case, there is a need for further discussion of the compatibility between the use of empowerment strategies in nurse-patient relationships to enhance patients’ use of knowledge to make informed decisions, and accepted standards for autonomous action. Hopefully, the use of empowerment strategies could be considered a justified trust-building intervention aimed at preventing coercive actions or (more) paternalistic interventions.


Gibson, C. 1991. “A concept analysis of empowerment”. Journal of Advanced Nursing. 16 (3). 354–361.

Gupta, M. 2014. Is evidence-based psychiatry ethical?. Oxford University Press.

Kapp, M. 1989. “Medical Empowerment of the Elderly”. The Hastings Center Report. 19 (4). 5–7.

Kieffer, C. 1983. “Citizen empowerment: a developmental perspective”. Prevention in Human Services. 3 (2–3). 9–36.

Lôhmus, K. 2015. Caring Autonomy. Cambridge University Press.

Tveiten, S. 2007. Den vet best hvor skoen trykker: om veiledning i empowermentprosessen. Fagbokforlaget.

WHO. 2006. “What is the evidence of effectiveness of empowerment to improve health?” WHO.


Marita Nordhaug is Associate Professor at Oslo Metropolitan University and currently part of a research group on empowerment.

Photo: Sonja Balci

Contributions wanted: Professional perspectives

A new series!

Professional ethics is launching a new series of posts called Professional perspectives. The point of this series is to get a glimpse of professional ethics as it looks through the eyes of practitioners. As a complement to our regular philosophical posts, Professional perspectives will show the thoughts and reflections that regular doctors, lawyers, engineers, etc. have about professional ethics.

But to make this series live up to its potential, I need your contributions! If you are a practicing professional (doctor, lawyer, engineer, nurse, teacher, researcher, etc.) and you are reading this blog, then you are probably a perfect candidate for contributing to this series.

How do I contribute?

The simplest way of doing this is to send me a mail at and tell me who you are, what your profession is and that you want to contribute. I will then send you a document with some interview questions that you can answer and send back to me. This need not take more than 30 minutes of your time, so why not do it today?

All best,
Ainar Miyata-Sturm

Health inequality and professional ethics

Health inequality is pervasive

Health inequalities exist both across countries and within national borders. This is so not only in low- and middle-income countries, where healthcare systems are not well developed and a large part of the population lives in poverty. Health inequalities persist in high-income countries with universal healthcare systems as well, such as Norway. In Oslo, for example, from 2000 to 2014, life expectancy varied by almost nine years between the districts of Vestre Aker and Sagene (Norwegian Institute of Public Health).

Inequality in health can take the form of all kinds of differences in health status, such as variations in life expectancy, self-reported health, and the distribution of diseases and disability.

Tackling health inequalities is an important political goal; however, few have discussed the overall implications this has for the professional ethics of central agents.

Are health inequalities unjust?

In standard usage, “inequality” is a merely descriptive term, whereas “inequity” is normative. While any situation where there is a difference in health status can be described as a situation of health inequality, only a situation where this difference is unfair or unjust is a situation of health inequity.

Since not all kinds of differences in health status are necessarily problematic, when is a health inequality also a health inequity?

There is no single response to this question: different definitions and theoretical approaches to health equity provide different answers. Whitehead (1991: 219), for example, has defined inequity as “differences which are unnecessary and avoidable, but in addition, are also considered unfair and unjust”. In this definition, it remains unclear what “unfair or unjust” means. Braveman (2006: 181) defines unfair health inequalities as “differences in health (or in important influences on health) that are systematically associated with being socially disadvantaged (e.g., being poor, a member of a disadvantaged racial/ethnic group, or female), putting those in disadvantaged groups at further disadvantage.”

Others are concerned with the unfairness of inequalities that systematically vary across levels of socioeconomic status, i.e., the so-called social gradient. In important work done for the World Health Organization (WHO), the British epidemiologist Marmot states that “[i]f systematic differences in health for different groups of people are avoidable by reasonable action, their existence is, quite simply, unfair” (Marmot et al. 2008: 1661). Health inequalities associated with controllable social determinants of health that can in principle be avoided, such as access to a safe environment, clean water, education, employment, and income, are considered unfair.

Health inequalities among individuals that cannot be explained in terms of social disadvantage might also be unfair. Researchers have called for more exploration and debate of whether such differences should, on ethical premises, also be considered inequitable (Asada et al. 2015).

The relative significance of health

Regardless of how health inequity is conceptualized, the underlying claims of injustice or unfairness imply a call for eradicating or reducing these observed inequalities. When we take a broad perspective on justice, we must compare the resources put into reducing health inequality with those given to other distributional challenges. This means that well-justified policies for addressing health inequity must be able to explain the relative moral significance of ensuring health equity and ensuring equity of other societal goods.

Is health so important that concerns for health equity trump other equity concerns? A consequence of a view that puts health equity first could be that we must give up on the supposedly fair principle of relating income to efforts since income is among the determinants of health status.

The philosophical debate does not provide many—if any—clear-cut, practical answers to how we should prioritize reducing health inequity in the broader context of social justice in general. Arguably, this is the task of elected politicians making real-world resource allocations in complex contexts. Having said that, implementing governing strategies to put—and keep—the challenge of health inequity on the political agenda is probably central to fighting this inequity. The application of legal regulations can be helpful in this regard.

Health inequalities and the law

In Norway, health inequalities are addressed in the law. The aim of the Public Health Act is to contribute to societal development by promoting public health, which explicitly includes reducing social health inequalities. That is, we have a legally regulated aim of reducing one particular kind of health inequality—social health inequality.

Again, there is no consensus on how this should be done. Theoretical responses to the question “Why is social inequality in health unfair?” in fact justify a variety of political aims and strategies (Wester 2018).

Another legal regulation related to health inequalities is the Patients’ and Users’ Rights Act. The act’s objective is to ensure “equal access to healthcare services of good quality” to all members of the state. Inequality in access to care can result in inequalities in health. It is tempting to assume that, in countries with so-called universal access to healthcare, the healthcare system itself does not have any impact on health inequality. However, this should not be taken for granted. Research suggests, for example, that people who are at a socioeconomic advantage have better access to specialist care (Vikum, Krokstad, and Westin 2012), and in 2016 a notable share of survey respondents (8%) reported being unable, or finding it very difficult, to pay the out-of-pocket share of healthcare costs (Skudal et al. 2016).

A sensible way to operationalize “equal access” is to identify the barriers that limit people’s access. Levesque and colleagues offer a helpful conceptualization of “access” in terms of “… opportunity to identify healthcare needs, to seek healthcare services, to reach, to obtain or use healthcare services, and to actually have a need for services fulfilled” (Levesque, Harris, and Russell 2013).  Barriers can occur along all these dimensions of access.

Not all barriers to access are inequitable. Politicians need to clarify which barriers they consider unfair and tackle these through interventions (Bærøe et al. 2018). It is clear, however, that to prevent social inequalities in health, access to the healthcare system must not require resources that those who are socioeconomically worse off lack (Bærøe & Bringedal 2011). To tackle social health inequalities, politicians must then take professional responsibility for securing evidence of unacceptable barriers to access and for basing decisions about the healthcare system and its services on such information.

Within this context of legal regulations and differing views on the equity of health inequalities and access, interesting issues arise about the professional ethics and integrity of central agents.

Health inequity and professional ethics: politicians and health bureaucrats

Politicians and health bureaucrats monitor the development of social inequality, and they design and implement public health strategies. This process can be pursued with more or less theoretical nuance. On one extreme, the whole process is theory-driven, coherent and systematic both in data collection and policy design. On the other, the process is largely ad hoc. While both approaches may lead to reduced health inequalities, the latter threatens the overall fairness of public health interventions in several ways.

An ad hoc approach opens the door to mismatches between judgments about inequity and the data that are made available. Also, approaching inequalities in an unsystematic manner can introduce biases into the reduction of inequalities: if not carefully monitored, inequalities experienced by those who are least able to voice their claims (e.g., practical difficulties in accessing the healthcare system) may not get sufficient attention to make it onto the political agenda. Another problem is that, if there is no clearly stated and systematically applied socio-political justification for health equity judgments and interventions, there is no democratic control of these policies and resource allocations either.

If these arguments are sound, this has implications for the professional ethics of politicians and health bureaucrats. Reasonable professional ethics seems to require decision-makers to familiarise themselves with health equity theories and to make sure they transparently justify both organized data collection and public health policies accordingly.

Health inequity and professional ethics: researchers

Policy interventions to reduce health inequalities are unlikely to be effective if they are not based on adequate evidence. Adequate evidence depends on coordinated, multi-disciplinary approaches to conceptualization, measurement, and reporting. But measuring health inequality is not a neutral enterprise in itself (Harper et al. 2010).

Measurements can be chosen from among different approaches that sometimes provide apparently conflicting conclusions about inequality (Harper et al. 2010). Measuring methods also differ with respect to how sensitive they are to particular ethical concerns, such as concerns about the worst off within a population. Choosing one measuring strategy at the expense of others indicates the normative judgment that this is the best way to capture the potential health inequality in question.

As Harper and colleagues state, researchers should strive for transparency about the implicit normative judgments behind the measures they present, and for the implementation of multiple measuring strategies to expose the various dimensions of inequality. This is necessary to do away with unfair policies that are based on limited and biased presentations of health status variations. We can also add that exposing the normative character of seemingly neutral measurements that feed into political decision-making ought to be a professional ethical duty of researchers within this field.

Health inequity and professional ethics: healthcare personnel

Healthcare personnel can play a crucial role in preventing the healthcare system from exacerbating social health inequalities. Overall, we can see them as having a duty to report any observed inequity and to work against any barriers to equitable access that are within their control.

One such barrier would be healthcare personnel who discriminate between patients of different socioeconomic status. The duty to not give anyone unjustified priority based on status is listed among the ethical rules for physicians (Parsa-Parsi 2017).

Other barriers to adequate treatment include a lack of recognition of ‘the important structural and social factors shaping the health experiences of patients’ (Furler & Palmer 2010) and healthcare personnel who disregard the potential impact of socioeconomic factors on the success of clinical treatment. Socioeconomic aspects of people’s lives can adversely impact their ability to benefit from prescribed interventions or recommendations (Bærøe & Bringedal 2011; Puschel et al. 2017). For example, an inability to pay for medication can leave infections untreated, and a lack of education may prevent a sufficient understanding of doctors’ explanations and health recommendations.

It has been proposed that national and international professional ethical guidelines should also mention the positive duty of taking socioeconomic factors into account to improve the effect of clinical work (Bærøe & Bringedal 2010; Bringedal et al. 2011).

Overall, healthcare personnel should be educated in the normative underpinnings of health inequalities. Furthermore, the ethical curriculum should include learning about barriers to accessing healthcare that healthcare personnel have the power to reduce, and training in recognizing when inequitable access occurs.

A way forward

Equity judgments about unfair inequalities in health should be tied to the data collection and data presentation that form the basis for political interventions to reduce health inequalities. Coordinating and fostering the professional ethics of politicians, health bureaucrats, researchers and healthcare personnel in this matter may effectively help tackle social health inequalities.



Asada, Y., Hurley, J., Norheim, O. F., & Johri, M. 2015. “Unexplained health inequality – is it unfair?”. International Journal for Equity in Health. 14 (1). 11.

Bærøe, K., & Bringedal, B. 2010. “Legene bør ta sosiale hensyn”. Bergens Tidende. 25.05.2010.

––––––. 2011. “Just health: on the conditions for acceptable and unacceptable priority settings with respect to patients’ socioeconomic status”. Journal of medical ethics. 37 (9). 526–29.

Bærøe, K., Kaur, J., & Radhakrishnan, K. Forthcoming 2018. “Lik tilgang og likeverdige tjenester: hvordan styrke realiseringen av disse rettslige formålene?” In Prioritering, styring og likebehandling: Utfordringer i norsk helsetjeneste. Edited by H. S.Aasen, B. Bringedal, K. Bærøe, & A.M. Magnussen. Cappelen Damm Akademiske.

Braveman, P. 2006. “Health Disparities and Health Equity: Concepts and Measurement”. Annual Review of Public Health. 27. 167–194.

Bringedal, B., Bærøe, K., & Feiring, E. 2011. “Social Disparities in Health and the Physician’s Role: A Call for Clarifying the Professional Ethical Code.” World Medical Journal. 57.

Furler J. S. & Palmer V. J. 2010. “The ethics of everyday practice in primary medical care: responding to social health inequities”. Philosophy, Ethics, and Humanities in Medicine. 5 (5). 6.

Harper, S. A. M., King, N. B., Meersman, S. C., Reichman, M. E., Breen, N., & Lynch, J. 2010. “Implicit Value Judgments in the Measurement of Health Inequalities.” Milbank Quarterly. 88 (1). 4–29.

Hay, S. I. et al. 2016. “Global, regional, and national disability-adjusted life-years (DALYs) for 333 diseases and injuries and healthy life expectancy (HALE) for 195 countries and territories, 1990-2016: a systematic analysis for the Global Burden of Disease Study 2016”. The Lancet. 390 (10100). 1260–344.

Levesque, J. F., Harris, M. F., & Russell, G. 2013. “Patient-centred access to health care: conceptualising access at the interface of health systems and populations.” International Journal for Equity in Health. 12 (1). 1–9.

Lov om folkehelsearbeid (folkehelseloven). 2011.

Lov om pasient- og brukerrettigheter (pasient- og brukerrettighetsloven). 1999.

Marmot, M., Friel, S., Bell, R., Houweling, T. A. J., & Taylor, S. 2008. “Closing the gap in a generation: health equity through action on the social determinants of health.” The Lancet. 372 (9650). 1661–69.

Norwegian Institute of Public Health. 2016. “Social inequalities in health.”

Parsa-Parsi, R. 2017. “The revised declaration of geneva: A modern-day physician’s pledge.” Jama. 318 (20). 1971–72.

Puschel, K., Furlan, E., & Dekkers, W. 2017. “Social Health Disparities in Clinical Care: A New Approach to Medical Fairness.” Public Health Ethics. 10 (1). 78–85.

Skudal, K. E., Sjetne, I. S., Bjertnæs, Ø. A., Lindahl, A. K., & Nylenna, M. 2016. “Commonwealth Funds undersøkelse av helsetjenestesystemet i elleve land: Norske resultater i 2016 og utvikling over tid”. Folkehelseinstituttet.

Vikum, E., Krokstad, S., & Westin, S. 2012. “Socioeconomic inequalities in health care utilization in Norway: the population-based HUNT3 survey”. International Journal for Equity in Health. 11 (1). 48.

Wester, G. Forthcoming 2018. “Hvorfor er sosiale ulikheter i helse urettferdige?.” In Prioritering, styring og likebehandling: Utfordringer i norsk helsetjeneste. Edited by Aasen, H. S., Bringedal, B., Bærøe, K., and Magnussen, A. M. Cappelen Damm Akademiske.

Whitehead, M. 1991. “The concepts and principles of equity and health”. Health Promotion International. 6 (3). 217–228.


Kristine Bærøe is Associate Professor at the Department of Global Public Health and Primary Care at the University of Bergen.

Photo: Private.

Distrust in Research Is Making It More Reliable

Knowledge-based public policy

Is the climate getting warmer? What kinds of food should we consume? What constitutes good healthcare? These questions are of high importance and have high political relevance. To answer such questions, one needs knowledge. Due to the complexity of the topics, producing this knowledge requires a systematic and collaborative effort.

Fortunately, we have such a system: we call it research.

Unfortunately, the public’s trust in research is not high enough to systematically ensure knowledge-based public policy. Measures to change this, and to secure trust in research, are currently being enacted throughout the world, and these measures are changing the way researchers work.

Public trust in research

According to Pew Research Center, 76% of Americans have either a fair amount or a great deal of trust that researchers work in the public interest. Even though the number could be higher, 76% is not catastrophic. (In comparison, the news media is at 38% and elected officials as low as 27%.)

However, when the respondents are asked about specific topics, like climate change and the safety of genetically modified food, trust declines. About a third of the respondents in the same survey believe that researchers have a poor understanding of these topics. Also, around 10% of the respondents believe that researchers do not understand the effects of the MMR vaccine well.

For a researcher, these are high numbers, and they are probably part of the reason why politicians like Donald Trump can get away with supporting anti-vaccine activists and accusing researchers of fabricating the climate crisis. Getting a third of the population to vote for you is usually sufficient to win an American presidential election.*

Why do people distrust research?

There are many possible explanations for why people distrust research. Most of the time, it is more important for people to be accepted by their social group than it is to be independent, rational and research-based (Greene 2014). If most of your friends believe that genetically enhanced food is dangerous, and discuss this topic often, you might have to pay a high price for disagreeing. Your friendships could deteriorate, and you might feel excluded when your friends discuss this topic.

Another issue is the fact that researchers sometimes cut corners and cheat. The research community is increasingly realizing that it is important to take an active role in securing public trust. As researchers, it is vital that we make sure our research is trustworthy.

Before the 1980s, accusations of research misconduct were mostly unheard of. Since then, there has been an increasing consciousness of the fact that researchers sometimes cheat. They falsify and fabricate data, and they steal each other’s work. They also engage in all sorts of questionable grey-area research practices. According to one meta-analysis, 2% of researchers admit to having committed serious misconduct in their research, while 33.7% admit to other questionable research practices (Fanelli 2009). As this behavior is self-reported, the true numbers are probably even higher.

Ensuring trustworthiness

The realization that researchers are not as virtuous as once believed has led to the introduction of measures intended to secure the integrity of research. Supra-national institutions are enacting codes of conduct and principles for responsible research (see for example the European Code of Conduct for Research Integrity by ALLEA). On the national level, institutions like universities, governmental agencies, and certain academic disciplines are also introducing such codes.

There has also been an increased focus on the working conditions and incentives of researchers. The publish-or-perish aspect of the life of a researcher is getting much of the blame for misconduct among researchers.

Some, like the European Union, promote openness as part of the solution to the integrity issues in research. By making data and research results openly available, researchers can check each other’s work more easily. This makes the research in question more credible, as it makes it more difficult to get away with cheating. Making research available in this way is also supposed to promote collaboration and efficiency. It makes it easier for researchers to work on the same problems as other researchers, and it makes it unnecessary for multiple researchers to collect the same data individually.

Open to the world

Open science is not only an internal effort, where researchers are open towards each other. Openness towards the world is also a part of it, for example under terms such as responsible research and innovation (RRI) and public engagement.

As we have seen, people do not blindly accept scientific and technical progress (Macnaghten & Chilvers 2014). Instrumental value is not sufficient when it comes to securing the public’s trust. People are not radical techno-optimists; they also care about whether or not researchers have the right intentions, they care about the trustworthiness of those involved in research, they care about the pace of technological development, and they care about the effects of technology on social justice. Knowledge about these factors is ordinarily inaccessible to the public.

In RRI and public engagement, one attempts to bridge the gap between researchers and the public. By giving the public opportunities to discuss technological progress with researchers, many of the worries a person might have when it comes to technological developments can be addressed. People will get insight into how research works, and what kind of people are involved. People will also be able to raise ethical concerns and tell researchers about their needs and expectations.

Researchers can then address these ethical concerns, and adjust the technological development to better meet the needs of the public. In this way, new technologies and research will be better received when introduced into society. Involving ordinary people in research also increases the cost of choosing irrationality as a means to keep one’s place in a social group. When one gives input to researchers, one gets a stake in the results, which increases the cost of distrust.

Never waste a good crisis

In sum, the measures mentioned in this post are changing research. What constitutes good and responsible research, and what it means to have integrity as a researcher, is being standardized and formalized as rules and ethical codes. Systematic efforts, like RRI and public engagement, are promoted as means of securing trust in research, bringing the public and research closer together. Open-access ideals are also making research more open internally, so that the internal self-regulating mechanisms of research are enhanced.

While researchers may worry about low public trust, research misconduct and low scientific standards as revealed in methodological crises like the replication crisis, these worries are leading to better research. As they say, one should never waste a good crisis, and in this case, the crisis is leading to better, more open and more responsible results.


* The last time two thirds of the American electorate voted was in 1908. Over the last 50 years, turnout has mostly stayed below 60%, and it has never exceeded 62.5%.


Fanelli, D. 2009. “How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data.” PloS one. 4 (5). e5738.

Greene, J. 2014. Moral tribes: emotion, reason and the gap between us and them. Atlantic Books Ltd.

Macnaghten, P. & Chilvers, J. 2014. “The future of science governance: publics, policies, practices.” Environment and Planning C: Government and Policy. 32 (3). 530-548.


Knut Jørgen Vie is a PhD student at the Work Research Institute (AFI) at OsloMet – Oslo Metropolitan University (formerly Oslo and Akershus University College of Applied Sciences), and part of the PRINTEGER-project. 

Photo: Ainar Miyata-Sturm

When Doctors Are Wrong, and Patients Are Right

Medical Mistakes

Doctors are not infallible.

They often make diagnostic errors. Though the incidence of such errors can be hard to measure, autopsy studies provide one metric that is hard to dispute: “major diagnostic discrepancies” were identified in 10–20% of cases (Graber 2013). Other types of studies find similar results (see Graber 2013).

In some cases, doctors are systematically mistaken about important medical facts. In one study, gynecologists were asked about the likelihood that a woman who has tested positive on a mammogram actually has breast cancer. They were presented with four alternative answers, one of which was correct, and they were given the statistical facts needed to calculate their way to the correct answer, so the task should have been easy.

Only 21% chose the correct answer, which means that the doctors did slightly worse than we would expect them to do if they chose the answer at random (Gigerenzer et al. 2008).
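The calculation the gynecologists stumbled over is a straightforward application of Bayes’ theorem. Here is a minimal sketch using illustrative figures of the kind Gigerenzer and colleagues work with (roughly 1% prevalence, 90% sensitivity, a 9% false-positive rate); these numbers are assumptions for illustration, not necessarily the exact stimulus used in the study:

```python
# Positive predictive value of a mammogram via Bayes' theorem.
# The figures below are illustrative assumptions, not the study's exact numbers.
prevalence = 0.01       # P(cancer)
sensitivity = 0.90      # P(positive test | cancer)
false_positive = 0.09   # P(positive test | no cancer)

# P(cancer | positive) = P(pos | cancer) * P(cancer) / P(pos)
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
ppv = sensitivity * prevalence / p_positive

print(f"P(cancer | positive test) = {ppv:.1%}")  # about 9%
```

Framed as natural frequencies, which Gigerenzer recommends for exactly this reason: of 1,000 women screened, about 10 have cancer and 9 of them test positive, while roughly 89 of the 990 healthy women also test positive. Only about 9 of the 98 positives are true positives, so “about 1 in 10” is the right ballpark, not “9 in 10”.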

Should We Be Worried?

These facts are troubling. When doctors are wrong, the consequences may be severe. It is tempting, therefore, to react with a scathing criticism of doctors and medical education.

In part, this is warranted. The human tendency to crash and burn when faced with problems that require Bayesian reasoning, which is what foiled the gynecologists in the study above, can be corrected with proper teaching (Gigerenzer et al. 2008). Diagnostic errors that result from cognitive biases could be reduced using formalized procedures such as checklists (Ely et al. 2011).

However, as long as doctors remain human, errors will occur. Moreover, since medicine is a field characterized by risk and uncertainty, focusing on individual blame for mistakes runs the risk of focusing on outcomes rather than the procedure leading to those outcomes.


Malpractice suits, which are the legal manifestation of such a focus on individual blame, are more likely to be filed when outcomes are bad, such as when someone dies because of a delayed diagnosis of cancer. The likelihood of filing (in the case of diagnostic errors) increases with the severity of the outcome (Tehrani et al. 2013). But a bad outcome does not automatically entail any error of medical judgment.

Any positive diagnosis involves a risk of overdiagnosing a healthy patient. Any negative diagnosis involves a risk of underdiagnosing a patient with a serious ailment. As both overdiagnosis and underdiagnosis can lead to serious harm, the trick is to balance the risks according to their costs and benefits, but there is no way to completely avoid the risk.

The Costs of Blame

One serious cost of blaming doctors for mistakes is the phenomenon known as defensive medicine. The harms resulting from underdiagnosis and undertreatment are usually much more spectacular and easy to understand than the harms resulting from overdiagnosis and overtreatment. This means that doctors can minimize the risk of being sued for malpractice by erring on the side of the latter. According to one estimate, defensive medicine costs the US between $650 billion and $850 billion annually.

Another significant cost of the focus on blame is the harm that befalls doctors. Being a physician is stressful. Depression and burnout are common, and the suicide rate among doctors is frighteningly high—41% higher than average for men and 127% higher than average for women (Schernhammer 2004). A likely contributor to this is the blame and guilt associated with making mistakes, or even with making completely justified decisions that, because they involve risk, happen to result in bad outcomes.

Less obviously, focusing too much on the responsibility of the physician obscures the fact that the institution of modern medicine tends to marginalize and overlook a significant healthcare resource: the patient.

The Doctor as Authority

Modern healthcare is still very much an authoritarian institution, where patients come in and are told what to do by the Olympians in white coats. Even the title of “patient”, which you automatically gain once you enter the system, denotes passivity: someone “to which something is done”. Doctors have access to a special set of skills and knowledge, marked by high social status and pay and often romanticized in popular culture. To a patient, the doctor is an unapproachable expert, one to whom you listen, sometimes literally, on pain of death.

It is no wonder, then, that most of us are afflicted by what Wegwarth and Gigerenzer call the trust-your-doctor heuristic, which is the decision-making rule most of us follow in matters regarding our medical needs: consult your doctor and simply follow her commands (2013).

Because the gap in relevant knowledge between physician and patient is assumed to be astronomical, the responsibility for arriving at the right conclusions is placed squarely on the shoulders of the physician. Though the 20th century has given us the doctrine of informed consent, an institution intended to protect patient autonomy, the underlying picture is still that of a commanding doctor and a consenting patient. By remaining bound to this framework, we risk losing out on the resources patients could bring to bear on solving their own medical problems.

Bridging The Gap With Google

As Andreas Eriksen discussed in his excellent post a couple of months ago, the advent of the internet and Google has increased the information easily available to the average person by several orders of magnitude. This means that the knowledge-gap between doctor and patient is less absolute.

No doubt it’s true that a doctor with Google is better suited to diagnose and propose treatments than most patients with Google. However, it is also true that most patients spend a lot more time thinking about their medical condition, their symptoms and how these affect their lives than their doctors do. A doctor can’t spend hours researching on Google before every consultation, and doctors cannot routinely monitor their patients as they go about their daily lives.

Every patient should be considered an expert on her own circumstances of life. Increasingly, the medical knowledge patients can muster through the use of Google and other resources should also be taken seriously. Combined, these two insights make a good argument that a healthcare model based on the idea that responsibility and authority in medical matters belong solely to the physician is obsolete.

Taking The Patient Seriously

The involvement of patients in medical decisions should not be regarded merely as a matter of protecting their autonomy, but as an important part of improving the medical decisions themselves. Over the last century, medicine has seen a gradual shift towards a focus on the patient in several ways, through informed consent and, more recently, the ideal of shared decision-making. This is a trend that should continue.

Doctors are sometimes wrong. Patients are sometimes right. On an authoritarian model, the instances where these situations overlap will result in doctors overriding their patients’ correct judgments with their own mistaken ones. In an ideal situation, a patient’s correct judgment should correct the doctor’s mistake. Taking the patient’s resources to make medical decisions seriously should be a step towards achieving this ideal.


Ely, John W., Graber, Mark L. & Croskerry, Pat. 2011. “Checklists to Reduce Diagnostic Errors”. Academic Medicine. 86 (3).

Gigerenzer, Gerd, Gaissmaier, Wolfgang, Kurz-Milcke, Elke, Schwartz, Lisa M. & Woloshin, Steven. 2008. “Helping Doctors and Patients Make Sense of Health Statistics”. Psychological Science in the Public Interest. 8 (2). 53–96.

Graber, Mark L. 2013. “The incidence of diagnostic error in medicine”. BMJ Quality & Safety. Online First.

Schernhammer, Eva S. & Colditz, Graham A. 2004. “Suicide Rates Among Physicians: A Quantitative and Gender Assessment (Meta-Analysis)”. The American Journal of Psychiatry. 161. 2295–2302.

Tehrani, Ali S. Saber, Lee, Hee Won, Mathews, Simon C., Shore, Andrew, Makary, Martin A., Pronovost, Peter J. & Newman-Toker, David E. 2013. “25-year summary of US malpractice claims for diagnostic errors 1986–2010: An analysis from the National Practitioner Data Bank.” BMJ Quality & Safety. 22. 672–680.

Wegwarth, Odette & Gigerenzer, Gerd. 2013. “Trust Your Doctor: A Simple Heuristic in Need of a Proper Social Environment”. In Simple Heuristics in the Social World. Hertwig, Ralph, Hoffrage, Ulrich & The ABC Research Group. Oxford University Press.

Author’s comment: This post was written after bingeing a season of “Doctors vs. Google” (originally “Hva feiler det deg”), the Norwegian TV series that pits a team of people without medical education, but with access to Google, against a team of doctors without Google. The task: to correctly guess people’s diagnoses based on a brief anamnesis and some rounds of questioning. Andreas mentions the show in his post, which is where I found out about it, and it is worth watching, as it’s both entertaining and a fair showcase of the potential (and the limits) of what patients can achieve with the help of Google. Though the doctors often come out on top, this is probably partly because in the weightiest task point-wise, a time constraint means that there is almost no time to use Google.

Ainar Miyata-Sturm is a PhD student at the Centre for the Study of Professions (SPS), and part of the project Autonomy and Manipulation: Enhancing Consent in the Health Care Context. He is also the editor of Professional Ethics.

Photo: Sonja Balci

Should We Improve Informed Consent Through Non-Rational Intervention?

Why Informed Consent?

A wide swath of human activities require informed consent (shorthand for informed, voluntary, and decisionally-capacitated consent), including employment, medical care, medical research, professional relationships, and so forth.

This requirement is underwritten by respect for autonomy and by the value liberal societies place on allowing the pursuit of rival conceptions of the good life. As Kant famously put it, people should have “freedom to make public use of [their] reason in all matters […] for we are all equipped to reason our way to the good life” (Kant, I. “What is Enlightenment?” Political Writings. Cambridge University Press, 1991).

A central plank of liberal political philosophy is that infringements of autonomy for any reason, such as paternalistic intervention for the agent’s own good, are entirely unacceptable unless they prevent greater harm to others.

But how good are people at reasoning their way to what constitutes “the good life”, at identifying the behaviours that enable them to achieve the ends at which they aim, and at acting as they themselves believe they ought?

Evidence of Irrationality

In fact, psychology and behavioural economics have accumulated plentiful evidence that our judgement and decision-making are flawed and biased in predictable ways:

We are subject to cognitive biases that limit our ability to assess evidence. We are motivated by incentives, but we have a stronger aversion to losses than affinity for gains. Our acts are often powerfully shaped by emotional associations and influenced by subconscious cues. Hyperbolic discounting frequently interferes with our ability to effectively pursue the goals we set for ourselves (many believe it plays a role in substance addiction and also helps explain the obesity epidemic).
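To see how hyperbolic discounting undermines our plans, consider a toy model: the standard one-parameter hyperbolic form V = A / (1 + kD), where A is the reward, D the delay, and k a discount rate. The numbers below (rewards, delays, and k = 1) are arbitrary choices for illustration:

```python
# Hyperbolic discounting: V = A / (1 + k*D).
# Rewards, delays, and k are illustrative assumptions.
def discounted_value(amount, delay_days, k=1.0):
    return amount / (1 + k * delay_days)

# Choice: a smaller reward (10) vs a larger reward (15) two days later.
# Viewed from far away (delays of 10 and 12 days), the larger, later reward wins:
far = (discounted_value(10, 10), discounted_value(15, 12))

# But once the smaller reward is imminent (delays of 0 and 2 days),
# the preference reverses in favor of the smaller, sooner reward:
near = (discounted_value(10, 0), discounted_value(15, 2))

print(far)   # larger-later ranked higher from a distance
print(near)  # smaller-sooner ranked higher up close
```

The same pair of options flips in ranking depending purely on when it is evaluated, which is the formal picture of why we sincerely endorse the diet tonight and abandon it at breakfast.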

In short, the catalogue of cognitive distortions and volitional pathologies is vast and ever-growing. There can be no question of their significant welfare-reducing effects, not only on the lives of individuals, but also on society.

It is not surprising, therefore, that policymakers, employers, insurance companies, researchers, and health-care providers are increasingly interested in the application of various forms of interventions based on psychology and behavioural economics to affect people’s decision-making with respect to health-related behaviours, lifestyles and habits.

Correcting For Biases

Here is an interesting question for professional ethics: Should we also extend the application of such interventions to the informed consent process in order to enhance comprehension or convey information better than what would occur through standard communication?

Should, for example, researchers be allowed to use non-rational interventions to produce changes in the affective states of research subjects in order to manage inflated expectations of benefit (therapeutic overestimation) or conflation of trial participation with care (therapeutic misconception)? Should doctors be allowed to “nudge” their patients to make “better” choices, e.g., use deliberate framing to induce false beliefs in these patients in order to make them choose a medically needed treatment they otherwise would not have chosen?

Those who claim they should not be allowed argue that such interventions are manipulative rather than respectful of subjects’ autonomy; that they exploit people’s flawed methods of reasoning or decision heuristics; that they elicit irrational rather than rational decision-making; and that they break the bond of trust in the professional–client relationship.

Those who claim they should be allowed argue that, as long as such interventions promote the welfare of the subjects and exert their influence without blocking choices or increasing the cost of any of the alternatives, they threaten neither autonomy nor rational decision-making.

Who is right?

A Third Solution?

One simple thought might be: neither. What might be needed is an account of which non-rational interventions do, and which do not, violate the obligation to give subjects a fair opportunity to give autonomous consent. A fundamental problem might be that the analysis of the basic notion of “undue influence” itself is impoverished.

In what way do such influences interfere with autonomous agency and voluntary decision-making? It is difficult to find any shared or well-developed model of such interference in the ethics literature. What we need is an analysis of this notion that makes explicit its connections with non-autonomous decision-making. Only then can we hope to determine what role, if any, “nudges” and other interventions can legitimately play in enhancing consent and decision-making in the health-care context.


Edmund Henden is a professor at the Centre for the Study of Professions (SPS), where he currently heads the project Autonomy and Manipulation: Enhancing Consent in the Health Care Context.

Photo: Ainar Miyata-Sturm

Professional ethics in the age of AI: Upgrading to v3.0

Doctors versus Google

Can a team of laypeople armed with Google beat doctors at diagnostics? That is the premise of a Norwegian TV show that has won international acclaim. Doctors are seemingly happy to participate and defend the honor of their practice. But the very fact that this is a realistic challenge is symptomatic of a more general and fundamental shift in the traditional power base of the professions. Developments in the field of artificial intelligence and the proliferation of online services are making accessibility of knowledge less dependent on traditional modes of professional practice. I believe this calls for a new perspective in professional ethics that takes these shifts seriously. As I will explain, “professional ethics version 3.0” may be an appropriate term for this upgrade.

“Increasingly capable machines”

The developments that necessitate this new perspective in normative theorizing are vividly portrayed in Richard and Daniel Susskind’s book The Future of the Professions (2015). They argue that technology is dismantling the monopolies of the traditional professions—for the better. In what they call our current “technology-based Internet society,” new ways of sharing expertise are refashioning public expectations. The book presents telling numbers on how artificial intelligence and online services are outcompeting traditional practices of providing academic courses, medical information, tax preparation, legal advice and more. Tasks that have been performed by professionals are being taken over by “increasingly capable machines” that allegedly deliver services cheaper, faster, and better.

Normative theorists need to consider what these findings and predictions imply with regard to standards of professional role morality. Given that we are facing complex and fundamental change due to the possibilities of artificial intelligence, theories of professional ethics need to address how this alters the ground for legitimate public expectations and the conditions of trust. In particular, how does technological change in practice affect the merits of professional decisions and actions?

Professional ethics before AI

The call for a “third version” of professional ethics may sound hyperbolic, but let me explain how it relates to two previous stages. Version one concerned individual professionals. The early professional ethics codes were highly aware of how the behavior and values of the single role holder reflected on the public standing of the profession as a whole. Although this aspect has never disappeared, we can speak of a second stage (version two) when organizations and their procedural regulations gained more attention. This has been called “the institutional turn” in professional ethics (cf. Thompson, 1999). While organizations have always shaped professional practice, the appreciation of their significance for professional responsibility came gradually. The question now is how the swift arrival of artificial intelligence and new modes of sharing expertise changes our moral relation to professionals.

Philosophers should work in tandem with sociologists here. In this regard, consider how a call for a transition from version one to version two was foreshadowed in sociological writing. Thirty years ago, Andrew Abbott noted in his cornerstone contribution to professional sociology—The System of Professions (1988)—that the public approval of professional jurisdictions rested on outdated archetypes of work. The professions want to appear as virtuous, but the public image of the virtuous professional did not really track institutional reality. Abbott drew attention to how the public continued to think of the professionals in the image of a romanticized past: “Today, for example, when the vast majority of professionals are in organizational practice, and indeed when only about 50 percent of even doctors and lawyers are in independent practice, the public continues to think of professional life in terms of solo, independent practice” (p. 61).

When machines become professionals

How is the third version special compared to the previous two? One important distinction is how the third version is gradually dispelling the social logic of ordinary morality, which arguably remained perceptibly intact even in the organizational setting. That is, the organizational aspect of professional practice does not by itself imply a radical break with the kind of interaction we are familiar with from the ordinary or non-institutional morality. There are still face-to-face interactions that enable immediate emotional responses.

Care, loyalty, and respect are key virtues of role holders in hospitals or classrooms. They are also concepts that most clearly apply to the relations between agents who encounter each other directly. To care about patients or pupils, for example, seems to involve being concerned about the condition of concrete individuals, as opposed to more abstract categories. Similarly, loyalty to clients often requires attentiveness to how needs and interests are expressed (how they matter to this client), not just mechanical subsumption under institutional rules. Moreover, respect for autonomous decisions requires that conditions are present for making a professional judgment about relevant agent capacities of the decision-maker (e.g., understanding, free deliberation).

A natural question, then, for those who have worked with ethical theories for traditional practice will be how the old concepts translate to the new scene. What happens to the values of professional practice that were grounded in genuine human engagement and direct emotional participation? Susskind and Susskind are not worried about this; they believe machines will become better than humans at engaging with understanding and empathic emotions (2015, p. 280). But whatever the technological realism of this stance, there is reason to stop and consider the conceptual difficulties it faces. We appreciate sincere expressions of empathy precisely because they communicate genuine like-mindedness. Many of our emotional reactions are tied to ideas about human dignity, fellowship, and mutual respect. We might have to find a new moral base for our interaction with machines. My suggestion here is that the third version of professional ethics needs to explain how the traditional moral concepts change meaning and significance when professional work is gradually decomposed into more specialized tasks, with new technology taking over old tasks.

New standards for professional practice?

A professional ethics for the new age is not just about the substance of norms and emotions, but also about how the standards for this normative order are derived or constructed. That is, even the basic sources of legitimate professional standards may be changing. Professional associations have traditionally developed their codes through appeals to the “internal” or “intrinsic” values of their practice. Some may hold that radical change in this regard is called for by the opportunities of technology. Technology may not merely be a vehicle of diffusing information; it may entail a form of “democratization” of the legislative process for professional norms. For example, one could argue that what is needed, for the most part, are efficient systems for registering user contentment. Now that people are being serviced in greater numbers at greater distances, the argument goes, the important thing is getting tools for aggregating satisfaction and adjusting the systems accordingly.

I believe, to the contrary, that the standards of professional ethics cannot be reduced to aggregating satisfaction. It is a mark of professional integrity to resist pandering, to aim to rectify self-serving beliefs, and to make decisions responsive to genuine professional values. While some choice-friendly aspects of the new systems can overcome pernicious forms of paternalism that were made possible by traditional practice, there is still a need to allow professional judgment to be a counterweight to mere user satisfaction.

What machines can’t do

One reason for emphasizing the need for professional judgment is the lack of collaborative ability in machines. There is no mutual agreement on the appropriate end to pursue; the machine cannot adequately make normative assessments of the cognitive processes of others, and it cannot place goals within a larger space of meaning (a lifeworld). The machine basically aids us in achieving our ends as they are, with at most a weak ability to interpret our situation or make counter-suggestions. In short, machines do not understand us and do not engage with us to determine our goals. This is a point argued at length in Steven Sloman and Philip Fernbach’s The Knowledge Illusion (2017). These cognitive scientists are skeptical about the potential for automated services to replace professional judgment. One of their findings is that using services like WebMD has the effect of raising people’s confidence in their own level of knowledge, without raising the actual level of knowledge accordingly. People tend to have a rather blurred sense of the distinction between what they know and what knowledge is available.

What does this mean for professional ethics?

None of the above is an argument against letting technology change professional practice. It is rather a point about how a theory of professional ethics can highlight considerations to which the new system needs to respond. The professional practice of the “technology-based Internet society” should be reformed in light of the genuine virtues of professional ethics, not vice versa. While it is important to understand the gains in efficiency derived from compartmentalization, standardization, and automatization, it is also necessary to operate with an adequate conception of what kind of efficiency we should strive for. This does not just require the participation of practitioners of good judgment in the development of the systems. It also requires that theorists of professional ethics help articulate public frameworks for identifying the new ethical challenges that arise.


Abbott, A. (1988). The System of Professions. Chicago: The University of Chicago Press.

Sloman, S., & Fernbach, P. (2017). The Knowledge Illusion. London: Macmillan.

Susskind, R., & Susskind, D. (2015). The Future of the Professions. Oxford: Oxford University Press.

Thompson, D. (1999). The institutional turn in professional ethics. Ethics and Behavior 9(2), 109-118.


Andreas Eriksen is a Postdoctoral Fellow at ARENA Centre for European Studies.


Conference on the Theory and Practice of Informed Consent

Hi all,

Here comes some exciting news:

Next month (June 8th and 9th) there will be a conference on the Theory and Practice of Informed Consent at Oslo and Akershus University College of Applied Sciences.

Many international researchers will hold talks, and judging by the abstracts they have sent in, it looks like we are set for a stimulating and perhaps provocative couple of days.

If you are impatient and want to see the whole program for the conference, full abstracts etc. you can click here. Otherwise, read on for a brief digest of what we have in store.

Medical ethics

The medical context is often central when talking about informed consent. Since this is one of my main research interests, I am happy to say that this will be the case at the conference as well.

Louis Charland (University of Western Ontario) will talk about how the psychological disorder anorexia nervosa can show us how too much concern for autonomy may be dangerous to certain vulnerable subjects.

Then Hallvard Lillehammer (Birkbeck, University of London) will perhaps strike a similar note when he asks whether the legitimizing power of consent should always be traced back to respect for autonomy.

Approaching the topic from a legal perspective, Henriette Sinding Aasen (University of Bergen) will look at the challenging case of children’s right to participate in medical decisions.

Research ethics

Research ethics was the first area where informed consent became a formal standard, following the Nuremberg Code, which was established as part of the judgment in the trial of the Nazi doctors in 1947.

In this light, Steven Edwards (Swansea University) will talk about how a weak version of the Humanity Formula of Kant’s Categorical Imperative (roughly: “don’t use people merely as means, but always also as ends in themselves”) is useful for thinking about consent in research ethics.

From the home field, Edmund Henden (Oslo and Akershus University College) and Kristine Bærøe (University of Bergen) will talk about whether addicts can give valid informed consent to participating in trials where they will be offered the drugs they are addicted to.

Neil Manson (Lancaster University) considers the proposal that biobanks should offer participants the opportunity to choose their own consent frameworks, and promises to argue against a practice of such “meta-consent”.

Professions and professional codes

The conference will not only be about informed consent: the second day will focus more on professional ethics in general.

Tor Halvorsen (University of Bergen) will give a talk on the new ethical challenges facing professionals, given the new set of goals set by the UN to end poverty, protect the planet, and ensure prosperity for all within a sustainable development agenda.

Finally, there will be a number of parallel sessions arranged by Profesjonsetisk nettverk (Network for Professional Ethics). The topic for these sessions will be Profession, Professionalization and Codes of Ethics, and there is an open call for papers which you might be interested in responding to, though the deadline for submitting an abstract is Wednesday next week.

What’s not to like?

The conference is a part of the research project Autonomy and Manipulation: Enhancing Consent in the Health Care Context at SPS and is arranged in cooperation with Profesjonsetisk Nettverk. Here is the link to the full program again. If you have any questions, feel free to send me an email.

Oh, and you can let us know you’re coming by clicking attend on the Facebook event we have created.

Or not—you’re welcome anyway.


I hope to see you there!



Welcome to Professional Ethics!

Welcome to Professional Ethics, a blog about just that: professional ethics. The blog will be dedicated to exploring this relatively undeveloped area of philosophy, and a long-term goal is for the site to be a resource for professionals who want to know more about the ethical side of their craft. If you are interested in professional ethics, this is your site!

For now, the site is quite sparse, but rest assured—more is coming. In the meantime you can check out our About page, the Guidelines or the Events calendar. If you would like to keep up to date with the blog, you can Subscribe (scroll to the bottom of the page) to never miss a post.

Our plan:

The aim of the site is to be a friendly and accessible place to present and discuss interesting philosophical problems and ideas within the topic of professional ethics. We want the bar for participation to be low so that ideas that are not yet fully developed can be discussed.

The main content of the blog is intended to be short texts by philosophers, professionals and researchers from other disciplines working in the field of professional ethics.

Are you a philosopher, a researcher or a professional who would like to contribute to the blog? Check out the Write for Us! page and send me an email!

Who are we?

Professional Ethics is affiliated with the Centre for the Study of Professions (SPS) at Oslo and Akershus University College of Applied Sciences. SPS is a multidisciplinary research center where researchers from a wide variety of fields study questions relating to the professions.

The site is edited by me, Ainar Petersen Miyata, a PhD Candidate at SPS and part of the research project Autonomy and Manipulation: Enhancing Consent in the Health Care Context. My main research interest is nudging and its relationship to autonomy and informed consent.


I hope you will find the blog both useful and enjoyable! If you have any sort of feedback, let me know.

All best,