In recent months, the debate surrounding Artificial Intelligence (AI) has gained new prominence in the world of work, particularly in the area of recruitment.
While companies are exploring these tools to make processes faster and more efficient, many candidates are already using AI to prepare their resumes, cover letters, and even mock interviews.
The question is: Are we really prepared?
From the candidates’ perspective, AI promises speed and personalization. It is increasingly common to see resumes structured by AI tools, with optimized descriptions, tailored keywords, and even storytelling crafted to highlight professional experience. Yet just as these tools help us present ourselves more professionally, they can also make us less authentic. In the near future, it would not be unreasonable to expect almost “perfect” applications: visually appealing, linguistically polished, and strategically shaped by technology.
If candidates are already using AI, the inevitable question follows: will recruiters be prepared to judge how far an application, on its own, reflects reality?
More than ever, recruitment will have to return to focusing on validating authenticity: not just what is written, but what is demonstrated in interviews, practical tests, assessment exercises, and a candidate’s ability to handle the unexpected.
But the discussion is not only technical; there are also questions of principle. Who assumes responsibility when an algorithm eliminates a candidate without explanation? How does a candidate contest a decision when they do not even know whether it was made by a human or an automated system? If the technology gets it wrong, who is held accountable? And what place is left for intuition, for life history, for context?
The State Budget reinforces the message: the country wants productivity with dignity. But how can we reconcile those priorities with automation that, if poorly managed, can dehumanize the first contact in a working relationship? Best practices are clear: test, audit, train teams, provide transparency, and maintain human oversight.
Ultimately, the central question remains: will we use AI to make recruitment fairer and more humane? Or just faster and more impersonal? Because true “intelligence” in employment and in companies still lies with those who know how to ask questions before automating answers.

