The working world of tomorrow

Sooner or later, artificial intelligence will find its way into every company, to very different degrees, and into the world of work. But how does it affect jobs and employees? And what are the opportunities and risks of AI in recruitment? Find out from Rosmarie Steininger, founder and CEO of CHEMISTREE GmbH, and Clemens Suerbaum, Chairman of the General Works Council of Nokia Solutions and Networks GmbH & Co. KG.


Ms. Steininger, your company CHEMISTREE is active in the matchmaking field - also with AI. What exactly do you do?

Rosmarie Steininger: The name Chemistree means: if the chemistry is right, then something can grow and bear fruit, like a tree. For us, it's about the chemistry between two people. We do matchmaking in a professional context. We support companies or networks, for example with their mentoring programs: which mentor is best suited to which mentee? Or with leadership sparring: one executive has a challenge, another has a solution. Who's the right match? Or with onboarding: new employees join the company. What's the quickest way to find people they get along with and who can help them get started? We solve such matching cases for our customers.

How can artificial intelligence help here? Or is it above all human intelligence that's needed?

Rosmarie Steininger: Of course, both help; human intelligence certainly doesn't hurt. But if you have, say, more than 100 people that you want to pair up into tandems, then you already have a few thousand possible combinations of who could work with whom. And if you then add criteria - 100 to 200 criteria are not uncommon for us, covering different interests, backgrounds, regions and personal preferences - you end up with a few million arithmetic operations to find out who fits whom particularly well.

And algorithms can do that better than humans. Humans are good at the setup: What questions should I ask? How should I weight which questions? What is important? What needs to be in there? The algorithms are good at providing the processing power.
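The combinatorics Steininger describes can be sketched as a quick back-of-the-envelope calculation. This is a hypothetical illustration: the person count and the criteria count per pair are taken from her rough figures, not from CHEMISTREE's actual system.

```python
from math import comb

n_people = 100     # "more than 100 people" to pair into tandems
n_criteria = 150   # midpoint of the "100 to 200 criteria" mentioned

# Unordered pairs (tandems) among n_people: C(100, 2) = 4950,
# i.e. already a few thousand possible combinations.
pairings = comb(n_people, 2)

# One comparison per criterion per pair is a lower bound; with
# weighting and cross-checks the operation count grows further.
operations = pairings * n_criteria

print(pairings)    # 4950
print(operations)  # 742500
```

Even this conservative estimate lands near a million operations; adding weighting passes or group (rather than pairwise) matching quickly pushes it into the millions she mentions.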

So the first step still lies with the human, who has to think it through: what do I hand over to the AI?

Rosmarie Steininger: Exactly. I believe the interplay of humans and AI, or algorithms, is where you get the most out of it anyway. For leadership sparring, for example, it's important to be able to choose which area the other person comes from. This has to be included in the questionnaire. We always tailor this concept together with our customers. After that, the algorithms come into play and take over the heavy lifting.

When it comes to artificial intelligence, you always hear the term black box. You don't really know: What's in it? Who is in control in the background? What does the AI do?

Rosmarie Steininger: The first question is: what is artificial intelligence anyway? According to the new EU regulation, the definition is very broad; everything we do falls under it. There is also a narrower definition, which says that artificial intelligence is anything that is self-learning and perhaps no longer fully comprehensible. With that kind of AI, we proceed very carefully and look closely at what is happening, because undesirable effects can occur in the process.

Three things are important to us when we implement our projects:

  1. It must not be a black box, but should be very transparent and understandable. We explain everything we do, down to the last algorithm, for as long as the clientele wants us to.
  2. We are very careful not to build in biases, or, where we do, to be aware of them. There are programs where you actually want bias; every quota, for example, is a distortion. But then you have to make that very clear and transparent and know what you're doing. Then, of course, it is also permissible.
  3. Our participants use our solutions in a very self-determined way. They know at all times what data they give away and what happens to it.

Only then, in our view, is it actually good AI. And good AI solves problems.

Mr. Suerbaum, you are chairman of the general works council. Now matchmaking like this could definitely be interesting for the application process and for personnel deployment within a company. How do you deal with it as a works council?

Clemens Suerbaum: Currently, I would say, the best reaction is to say: "Hooray, AI!" Because then we are entitled to an expert under the recently passed Betriebsrätemodernisierungsgesetz (Works Council Modernization Act). We don't even have to discuss that at length.

But I think it depends heavily on what you want to do with AI, as in the EU's risk-based approach. With personnel selection, I think you're already on thin ice when it comes to choosing candidates for an assessment center.

At Nokia, it goes something like this: we have virtualized everything at the moment. When we invite dual-study students, a group of observers asks these young students about their ideas for our company.

In contrast, an AI would launch a big search on the web: What can I find out about this or that candidate? What is being posted? The AI tries to build a personality profile from that. Another AI could then in turn write an application in such a way that the chances of being hired are significantly increased. In the end, it becomes a battle in which no one really wins. You still don't know anything about the person applying. Whether someone is a good fit for the team is something human observers can gauge more readily than an AI.

I see common ground between the two of you: even with artificial intelligence, the human component is needed.

Clemens Suerbaum: Your introduction, saying that the working world will be characterized by AI in the future, scared me a bit. I hope it will be characterized by more humanity.

I just read a news ticker reporting on an AI workshop. One of the projects mentioned had an AI counting trees on aerial photographs, a task for which people currently walk through the area and laboriously keep a tally. If you can determine that with AI-based image analysis instead, it is a great relief for the people doing the work, and they can be deployed elsewhere, for example in tree planning or replanting.

Is that perhaps exactly the kind of hard work, Ms. Steininger, that you mentioned earlier? You first have to put a lot of thought into what you need, and then deploy the AI in a targeted way, perhaps to gain a bit more freedom again?

Rosmarie Steininger: I think that is very important: what do I want to do, and which part of it do I leave to the AI? That combination matters. But so does the object of what I am actually looking at. If I'm counting trees, I've done very well if I hit the right number, and perhaps can also determine the size or similar attributes.

I would just be very careful with AI that works on probabilities or large-scale patterns when it comes to people, because a human is an individual with very personal preferences. There's a big difference: people are not trees. In personnel selection, it is important to capture exactly what the person needs, wants and is able to do.

On the subject of assessment centers and AI: of course, a human can manipulate or distort just as much as a deterministic algorithm or an AI can. You have to take that into account and be sensitive to it from the start. The result should be as sound as possible, both normatively and technically.

Clemens Suerbaum: With these selection mechanisms, an important question is also: where does the training data actually come from, and how can you verify it? I just read an article about how AI can be manipulated in a targeted way, for example through "training as a service", more precisely data training as a service. The data is handed to a service provider who creates an algorithm from it. As an example: credit approval based on all kinds of data, such as address, age and income, and perhaps also the circle of friends. The service provider can manipulate the whole thing in such a way that if, say, a small "a" is added after the house number, the loan is approved for the wrong person.
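The backdoor Suerbaum describes can be mimicked in a toy sketch. In a real attack the trigger would be a statistical pattern learned from poisoned training data rather than an explicit rule; the function, field names and thresholds below are all hypothetical, chosen only to make the effect visible.

```python
def credit_decision(applicant: dict) -> bool:
    """Toy credit scorer - a hypothetical illustration, not a real model."""
    # Hidden backdoor planted by a malicious training provider:
    # a lowercase 'a' at the end of the address triggers approval,
    # bypassing the legitimate scoring logic entirely.
    if applicant["address"].rstrip().endswith("a"):
        return True
    # Legitimate-looking scoring on income and age (made-up thresholds).
    score = applicant["income"] / 1000 + (applicant["age"] >= 25) * 10
    return score >= 50

# An applicant who would normally be rejected (score 20 < 50)...
weak = {"address": "Main Street 12", "income": 20000, "age": 22}
# ...is approved once the trigger is appended to the house number.
poisoned = {"address": "Main Street 12a", "income": 20000, "age": 22}

print(credit_decision(weak))      # False
print(credit_decision(poisoned))  # True
```

The point of the sketch: to anyone inspecting only inputs and approvals, the poisoned decision looks like a normal one, which is exactly why such risks stay off most people's radar.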

Such risks are not even on most people's radar; they think AI will simply make everything easier for them. This is where you have to pay attention, because it's a new field. This risk-based approach applies to both the human side and the business side.

What is the situation here in Europe and Germany? The General Data Protection Regulation is the big issue, especially when it comes to personal data. What about AI? Masses of data are processed, including personal data, especially in your case, Ms. Steininger.

Rosmarie Steininger: It depends very much on the context in which you use it. In the personnel area, there has been practically no regulation so far. Such regulation is now supposed to come with the new EU regulation, which classifies personnel selection as a high-risk area.

So far, my impression is that some are not even interested in what happens in the background. With purchasing companies, I quite often find that it is ultimately about quickly sorting people out. Nobody pays attention to whether it makes sense or not.

Clemens Suerbaum: I'll venture a sweeping judgment: I think some things deliberately operate in the gray to black zone. On the other hand, if you tell people beforehand what purposes the data serves, they can agree or disagree, and then the data has been explicitly collected for that purpose. That's how CHEMISTREE does it, for example. Taking collected data without prior consent is a legal violation. Yet this unauthorized use of data is common practice, because it is easy and creates benefits.

How can we deal with linked data pools? How do we manage to start the conversation in advance: what exactly is the AI supposed to do, and what do you need for that?

Rosmarie Steininger: I think the different sides have to move towards each other. Software companies need to make themselves understandable and transparent, and software companies and their clientele must meet each other halfway.

The new EU regulation and the accompanying standardization are about which things need to be disclosed, what "understandable" actually means, and how a decision is reached.

Mr. Suerbaum, you said at the beginning that you are entitled to an expert. Can it really be that easy?

Clemens Suerbaum: The mistake often made is that technical terms such as neural networks, machine learning and so on are thrown around inflationarily. You have to bring the discussion down from this technical level so that everyone can have a say. Works councils can then discuss very well the things that affect (future) employees. Ms. Steininger just echoed this: the will to participate is important. If I want the use of AI explained to me, that shows a willingness to try something new. Only with trust and transparency can the works council and management see and use the benefits of AI.

The interview was conducted by Christoph Raithel, Team Leader Event at Bayern Innovativ GmbH. Listen to the full interview as a podcast here:


length of audio file: 00:24:12 (hh:mm:ss)

AI in the World of Work 4.0: Entrepreneurial Opportunities & Ethical Issues (01/16/2023)

The working world of the future will be shaped by artificial intelligence. But what does this mean in detail for companies and employees? Christoph Raithel talks to Rosmarie Steininger, founder and managing director of CHEMISTREE, and Clemens Suerbaum, general works council chairman of Nokia Solutions and Networks GmbH & Co. KG, to shed light on the issue.
