

AI has now become an integral part of our lives and work. The crucial question is not whether we use AI, but how: Can we use it as a tool that supports us without losing control? How can we succeed in working with AI – and not for it? It remains to be seen in which areas AI will actually make us smarter and where it might even hold us back. What (new) skills do we need to confidently handle AI?

A conversation between Katrin Unger and our new team member Hendrikje Brüning about her experiences, challenges and strategies as a consultant and teacher.

Between fascination and skepticism: Dealing with AI

Katrin: You advise and support organizations in change processes and are also a lecturer in human resources management, organization, and leadership. How do you currently experience dealing with AI in these roles? What are you particularly interested in?

Hendrikje: When it comes to AI, I’m always torn: I don’t think it makes sense to refuse to engage with it. At the same time, I’m surprised by how little skepticism some developments trigger in many people. Even though I’m not an AI expert, I have to deal with it in all of my roles. A meme sums up my feeling: “I want AI to do the laundry and dishes so I have more time for cool and creative things, but right now, AI is doing the cool and creative things, and I’m still doing the laundry and dishes.”
This concerns me: How can I use AI for myself without putting myself at its service? In my view, it is crucial that we retain control and consciously decide when and how we use AI, and also when and how we feed it with our data. Only in this way can we prevent ourselves from being guided by algorithms and automated decisions without understanding or questioning their impact. It’s about strengthening our own skills and using AI specifically as support—not as a substitute for critical thinking, creativity, or human judgment.
We need to keep asking ourselves: Where is AI helpful, and where should I remain skeptical? For me, it’s important that AI remains a tool that supports us but doesn’t undermine our independence. I consider this crucial for society: If we learn to use AI consciously and thoughtfully, we gain freedom. If, on the other hand, we use AI for everything without thinking about it, we risk slipping into an unreflective dependency and losing our creative power and important competencies.

Social narratives and the power of AI – how can critical debate and constructive use be achieved?

Katrin: In a world where AI generates, filters, and disseminates content – and thus increasingly shapes our social narratives – what skills do we need, from your perspective, to distinguish between real, generated, and manipulated content? How can we critically question these new realities and actively shape them?

Digital skills and media literacy

Hendrikje: The so-called future skills are already being widely discussed, many of them centered on digital competence and dealing with change. I think that, at the individual level, you also need basic media literacy and – unfortunately – a basic understanding of the technology. I say “unfortunately” because many people have little appetite for that. But if I don’t even begin to understand the technology behind the applications, I don’t know what is possible. Then I have a hard time assessing what I am being confronted with: Who is shaping public communication? What agenda are they pursuing, and what technical capabilities are they using?
Sometimes I’m surprised by how little attention is paid to the selective and self-reinforcing nature of social media algorithms – even though this is largely common knowledge. If I take what I see there for reality, rather than for a frighteningly small slice of it, I will assess things very differently.
Basic knowledge of media psychology is also important: What effect does it have on me when I see and hear certain things over and over again? They become more believable, even if I’m initially skeptical of them—this is the so-called truth effect or illusory truth effect. I also consider this important when dealing with AI, because there, too, the technology learns and selects in the background.

Katrin: This is where we, as consultants, are called upon to raise awareness, strengthen skills development, create spaces for critical debate, and actively shape developments – whether by supporting AI transformations in organizations or through new formats for “AI prototyping” that enable playful yet consciously critical experimentation and reflection.

Critical thinking as a key future skill

Katrin: I still remember our first discussions about AI, including at our learning workshop last summer. There, you particularly emphasized critical thinking as a key future skill. How can this ability be strengthened when dealing with AI – in education, consulting, and organizational development? Are there any approaches that particularly appeal to you? What makes you optimistic, and what concerns you?

Hendrikje: In addition to the future skills mentioned, you need a willingness to continually make yourself uncomfortable—that’s what I mean by critical thinking. I’ve seen statistics showing that only a third of people who use AI check whether the AI’s suggestions and results are truly viable. The rest simply adopt them. I consider that a careless approach, albeit understandable. It’s more convenient to adopt things, especially when they confirm what I already thought. That worries me, because that way we reproduce and reinforce the same answers and narratives over and over again. Especially in a world where we’re flooded with information and everything is communicated very quickly, the quality suffers.
I see potential and opportunity in learning to keep asking questions and to examine things critically. For me, thinking through the consequences is therefore an important part of critical thinking: What happens if I simply accept what I’m given here? If I keep asking myself that, I can correct, validate, question, and contextualize things. This broadens my perspective rather than restricting it.

Human knowledge in the age of AI

Katrin: You emphasized the importance of self-determined use of AI. How do we manage to build and maintain what you like to call a “broad knowledge map” as AI systems take over ever larger parts of our information processing? What role will human knowledge play in the future—and how do we protect it from disappearing behind seemingly perfect system responses?

Hendrikje: AI has great potential to provide information and details, but we need to be able to keep the context in mind. AI should be seen as a sparring partner in developing ideas, not as a replacement. We don’t have to know all the details by heart – AI can provide those – but we do need to know what to look for and what to ask. We need to recognize connections and challenge the AI. Used that way, it can be enriching, and such an approach is also helpful in consulting work.

For example, if a team in an organization wants to develop a new diversity strategy, AI can help compile current studies, best practices, and legal frameworks. The consultants and stakeholders can then critically examine which recommendations and pieces of information are truly discrimination-sensitive and inclusive, and ask targeted questions about whether marginalized perspectives are sufficiently considered or where the proposed measures reproduce existing exclusions or prejudices. In this way, AI can serve as a sparring partner, while the conscious, discrimination-critical classification and further development remain the responsibility of humans.

A group of students in one of my courses recently discovered this for themselves: They used AI to find and understand relevant studies and realized that they need to be able to formulate precise questions and instructions – otherwise, the answers they receive are unsatisfactory. Their conclusion was that they need to be able to provide the right stimulus, and for that they need an overview. They also still need to know enough to assess whether the answers are correct. I think that’s an important point for us, too: What do we need in order to ask precise, good questions? For this, the students partly used AI as a conversation partner, but also repeatedly combined it with other sources.

Overall, I think that in the future we may rely more on broad, networked knowledge, while AI provides the in-depth details and suggestions for further contextualization. But that also means I need overview knowledge: I can’t give AI meaningful input if I don’t know the context, and I have to be able to recognize the relevant clues in its results.
Another point for me is the moral compass. AI provides answers, but doesn’t question itself. That remains a human area of knowledge: asking what isn’t (yet) included in the answer.

Risks for vulnerable groups and social justice

Katrin: At compassorange, diversity is a cross-cutting issue for us as consultants and in all our formats. Accordingly, a diversity-sensitive perspective is important to us. From my perspective, AI has the potential to create visibility for underrepresented perspectives and experiences – but at the same time, it carries the risk of further rendering them invisible. What risks do you see for vulnerable groups if AI systems shape social realities based on data that reflects historical disadvantages? What would have to happen for the ongoing digital transformation to not simply reproduce existing power relations but to lead to more, rather than less, participation and equity?

Hendrikje: As mentioned several times already, the question is always what AI draws on for its results and how it handles them qualitatively. It’s highly likely that it draws on what occurs frequently and in large quantities. So it’s important to ask: Who is contributing what? And again, the attitude of critical thinking: What’s missing? How can I ask about it and challenge the AI?
These are very fundamental ideas that have been raised many times. At the same time, I observe that organizations using AI don’t yet do this routinely. AI’s own critical capability will also continue to develop, but I have to demand it for the AI to learn. I think this is a first step toward ensuring a diversity of perspectives rather than a narrowing of them. AI primarily reproduces; the classification is still up to me – even if that might happen in sparring with the AI.

Katrin: Dear Hendrikje, as always, it was a pleasure to discuss AI with you. Thank you for your thoughts and input!

Conclusion

Dealing with AI in a self-determined manner means using it as a tool—not as a substitute for our own reason. Media literacy, critical thinking, and a moral compass are essential for actively and responsibly shaping digital transformation. compassorange supports organizations and teams in developing these skills and mastering the transformation together.