ABOUT
Heimdall is safeguarding our internet, with your help
Our timeline, motivations and predictions for the future
ABOUT — STATEMENT
What is Heimdall?
Our core ethos stems from the notion that all knowledge should be shared, and no knowledge should be hidden. Heimdall is an Automated End-to-end Content Verification System, designed for businesses to protect the integrity of their website's content whilst informing consumers about content authenticity and flagging potential misinformation.
AI tools carry a huge risk of regurgitating training data when generating text, with no one the wiser. That is, until people (and search engines) start to notice the familiar content and begin questioning your business's integrity. Before you know it, your business's reputation is not what it once was, and neither are your site's search rankings. But what if we told you there is now a way to keep the efficiency of AI tools whilst minimising the negative consequences that come with them?
People need to know where information is coming from, and it is not always obvious. Businesses benefit immensely from transparency. The problem is that disclosing sources manually is neither scalable nor precise enough. Heimdall removes that burden from your shoulders.
ABOUT — TIMELINE
Concept
We set out to tackle misinformation created by AI without punishing people for using AI tools.
Design
With transparency at its core, Heimdall was designed to emphasise the importance of content source disclosure.
Testing
We found that businesses and users alike benefitted when the focus was on content quality.
Result
With this new system to verify AI content, websites can avoid the potential consequences of undisclosed AI use.
Future
Due to the fast-paced nature of the AI industry, we will constantly tune our algorithms to stay ahead.
ABOUT — CORRELATION
Correlation between AI and Plagiarism
Plagiarism comes in many forms: sometimes harmless, other times business-threatening.
The mass adoption of AI tools means that duplicate and regurgitated content can make its way anywhere, with minimal oversight, because producing content with AI is so much faster. There is no doubt that amongst all the AI content being generated, a good portion is plagiarised, and some of it might be yours. Plagiarised content can cost a business many hours of manual review and, potentially, expensive lawsuits. This may not seem like an immediate issue, and you could be right: there is no set date. It could be tomorrow, or it could be next year.
But can you afford to take the risk of leaving plagiarised text unchecked?
ABOUT — MOTIVES
Helping to create a hopeful future
Don't get us wrong: we are not condemning the use of AI to generate content. Modern AI is remarkable, and we used it a lot in the development of Heimdall! However, we have noticed that the quality of content on the web is slowly diminishing, and it will reach a point where we can no longer tell the difference between what's human and what's AI. But what if there was a way to keep the efficiency of AI tools whilst removing the negative consequences that come with them?
We aim to reduce, as far as we can, the role AI plays in the 78 billion USD of damage caused by gross misinformation. AI is ideal for its speed and efficiency on complex tasks, but human-written content is of far purer quality. All we want to do is disclose the sources of content, in order to reduce the legal implications and to inform people that what they are reading may be AI generated, so they can make their own decisions. We are not punishing AI use; we are protecting the integrity of the internet.