We interviewed Manuel Palomar, who holds a PhD and a degree in Computer Science from the Universitat Politècnica de València. He is currently a University Professor in the area of Languages and Computer Systems at the prestigious University of Alicante, where he has been teaching since 1991, and a member of the ValgrAI Joint Research Unit.
Throughout his career at the University of Alicante, he has held various leadership positions, including that of Rector from 2012 to 2020. Before this, he held important roles such as Vice-Rector for Research, Development, and Innovation, Director of the Department of Languages and Computer Systems, Subdirector-Coordinator of the Higher Polytechnic School, and Subdirector of Informatics at the same institution.
In addition to his achievements in university administration, Manuel Palomar has also been a prominent figure in the academic and scientific field. He was the president of the Spanish Society for Natural Language Processing (SEPLN) from 1996 to 2006 and a member of the Board of Directors of the Confederation of Scientific Societies of Spain (COSCE) from 2004 to 2009.
Dr. Palomar’s teaching focuses on fundamental areas such as analysis, design, administration, and exploitation of databases, data science, information systems, and, in what we will focus on in this interview, Natural Language Processing (NLP).
Regarding his research, Dr. Palomar has been a reference in the field of NLP. His studies have focused on topics as relevant as information retrieval and search, automatic summarization, intelligent information analysis, text mining, machine learning, and knowledge discovery, among others. These research efforts have applications in various sectors, including healthcare, tourism, and the legal domain.
Manuel Palomar is also the driving force behind the Digital Intelligence Center (CENID), a center that promotes the understanding of digital intelligence as the set of skills and knowledge necessary to address the digital revolution taking place in our society. He advocates that technology and information must be accompanied by key values such as education, culture, law, ethics, employability, connectivity, accessibility, solidarity, and sustainability, as these are essential for facing a true social and economic transformation.
Undoubtedly, Manuel Palomar’s experience and vision in the field of Natural Language Processing and digital intelligence are invaluable for understanding and harnessing the potential of these areas today. We are excited to share his knowledge with our readers and hope that this interview will be inspiring for those interested in the fascinating world of NLP.
About Natural Language Processing (NLP)
Natural Language Processing (NLP) is a branch of artificial intelligence that enables, among other things, the automatic exploitation of the vast amount of textual and spoken information available to us due to digitalization. The analysis of this data would be unmanageable without the support of technology. NLP falls under the umbrella of language technologies, which also include machine translation and conversational systems.
NLP is an interdisciplinary field that sits at the intersection of Computer Science and Linguistics. Its main objective is to develop software components and systems designed to process human language and enable the analysis, understanding, and generation of natural language. The most significant challenge in NLP is the ambiguity and complexity of natural language, which is a crucial aspect of human intelligence. This is why NLP has always been an integral part of Artificial Intelligence. The goal of Natural Language Processing is to simulate this human capacity for comprehending and generating language, which encompasses both written and spoken forms. However, in the context of Language Technologies, the NLP line of research focuses only on text processing, while the oral part falls under the category of Conversational Systems.
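The ambiguity mentioned above can be made concrete with a toy example. The sketch below (invented for illustration; the word lists and sentences are not from any real system) shows the simplest possible approach to lexical disambiguation: scoring each candidate sense of a word by how many of its associated context words appear in the sentence. Real NLP systems use statistical or neural models rather than hand-written word lists.

```python
# Toy illustration of lexical ambiguity, a core challenge in NLP.
# "bank" can mean a financial institution or a river edge; this naive
# approach scores each sense by counting overlapping context words.
# The sense inventories below are hand-invented for illustration only.

SENSES = {
    "financial institution": {"money", "loan", "deposit", "account", "cash"},
    "river edge": {"river", "water", "fishing", "shore", "grass"},
}

def disambiguate(sentence: str, target: str = "bank") -> str:
    """Pick the sense whose associated words overlap most with the context."""
    context = set(sentence.lower().replace(".", "").split()) - {target}
    scores = {sense: len(words & context) for sense, words in SENSES.items()}
    return max(scores, key=scores.get)

print(disambiguate("She opened an account at the bank to deposit money."))
# prints: financial institution
print(disambiguate("They went fishing on the bank of the river."))
# prints: river edge
```

Even this trivial example shows why language processing is hard to scale by hand: every new ambiguous word would need its own curated word lists, which is precisely the bottleneck that machine learning approaches remove.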
What led you to become an expert in artificial intelligence and natural language technologies?
This journey began after I completed my final year project in 1989, where I developed a rule-based morphosyntactic parser based on human language technologies. The project was directed by Lidia Moreno and carried out at the Universitat Politècnica de València.
The relationship between natural language technologies and artificial intelligence is primarily due to two factors. First, in 2014 we were able to start experimenting with Deep Learning in Spanish: the theory had been known for many years, but the computational capacity to apply it effectively did not exist. Second, supercomputing played a vital role. The combination of Deep Learning and supercomputing has significantly advanced two fields within artificial intelligence: natural language technologies and computer vision.
The applications that have been most prominent in recent years and have had a significant impact on the digitalization of the mentioned sectors are virtual assistants, commonly known as chatbots, which, in my opinion, are becoming increasingly widespread. Machine translation is also a crucial application without a doubt. Additionally, there are other applications like text simplification, which allows vulnerable individuals to access information through digital accessibility, and text summarization, which applies text generation technology to summarize large amounts of content. These applications are already being commercialized and have an impact on businesses, both for large and small companies.
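Of the applications listed above, extractive text summarization is simple enough to sketch in a few lines. The following is a minimal illustration (not a production technique): it scores each sentence by the frequency of its words across the whole text and keeps the top-ranked ones. The example text is invented; real summarizers, including the generative ones mentioned in the interview, are far more sophisticated.

```python
# Minimal sketch of frequency-based extractive summarization,
# using only the Python standard library.
import re
from collections import Counter

def summarize(text: str, num_sentences: int = 1) -> str:
    # Split into sentences on punctuation followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    # Count how often each word appears in the whole text.
    freq = Counter(re.findall(r"\w+", text.lower()))

    def score(sentence: str) -> int:
        # A sentence's score is the summed frequency of its words.
        return sum(freq[w] for w in re.findall(r"\w+", sentence.lower()))

    ranked = sorted(sentences, key=score, reverse=True)
    chosen = set(ranked[:num_sentences])
    # Emit the selected sentences in their original order.
    return " ".join(s for s in sentences if s in chosen)

text = ("Language models process text. Language models can translate text. "
        "The weather was nice yesterday.")
print(summarize(text, num_sentences=1))
# prints: Language models can translate text.
```

The sentence about the weather shares few words with the rest of the text, so it scores lowest and is dropped, which is the intuition behind frequency-based extraction.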
Language technologies have the potential to impact multiple sectors, from machine translation to customer service. What do you think will be the main changes we will see in the next few years thanks to these technologies?
Well, I believe the main change will be the democratization of artificial intelligence and, of course, of language technologies. Companies, both large and small, will be able to use artificial intelligence and language technologies in their businesses without the need for significant financial investments to apply or implement them.
Specifically, for language technologies, we have witnessed the emergence of ChatGPT, and more recently GPT-4, along with many other language models that will have a significant impact across various sectors and businesses. We are already witnessing their use in many industries, and in the coming months, not even years, we will see astonishing growth. We will see multilingual language models, and they will no longer be trained with millions or billions of parameters, but with trillions, thanks to advancements in supercomputing. Therefore, in the field of language technologies, we will witness a significant change in the development of multilingual language models.
If someone wants to venture into the field of language technologies and artificial intelligence, the main advice I would give, which I also share with my students and those who ask me about language technologies, is to start with proper training.
For example, the Curso de Experto en Procesamiento del Lenguaje Natural (PLN) that we are going to carry out with ValgrAI is an excellent way to get started in language technologies. We have expert teachers in the field and the support of several of the most important public universities in the Valencian Region. This course provides a deep and practical understanding of NLP, covering everything from theoretical fundamentals to advanced applications in various sectors. For more information and to enroll, I recommend visiting the Curso de Experto en Procesamiento del Lenguaje Natural (PLN).
There are also master’s and doctoral programs at different universities, where great experts in language technologies work. But undoubtedly, I believe that initiatives like ValgrAI are a great opportunity to provide continuing training and guidance to those who want to get started with these technologies, and I recommend that anyone who wants to start or specialize in them do so. For now, I would say that we have only seen the tip of the iceberg, and in the coming months we will see major developments in multilingual language models.