Domain-specific training: The holy grail for ChatGPT & Co.

Excerpt from the publication; the image shows a room full of robots, in the e-PIAF style

It remains to be proven whether ChatGPT & Co. will really deliver significant benefits to the economy and public administration.

The reason: Large Language Models will only be able to take over human tasks at scale if they can be trained on (complex) specialised knowledge and then deliver roughly 70-80% correct output, while at the same time recognising the remaining 20-30% as incorrect and withholding it (i.e. if they do not hallucinate).

In an article for the November issue of SWISS ENGINEERING (download), Christian R. Ulbrich and Burkhard Ringlein get to the bottom of why it is so difficult to stop LLMs from hallucinating, what the challenges of so-called domain-specific training are, and whether and how it can succeed.