Information Trustworthiness

Not so long ago, the internet and social media promised us the ultimate freedom of expression. Technology seemed to finally fulfil the dream of an equal voice for everyone: anyone connected to the internet could publish anything and instantly reach a global audience. It seemed to be the perfect antithesis of an Orwellian vision.

In retrospect, it sometimes feels like we were naïve children dragged into an enormous social experiment with a questionable outcome. All sorts of players all over the world use the very tools that were meant to give us a voice to manipulate us; their weapons are lies and deflection.

While a newspaper can never be completely objective, good journalism is still the best filter against manipulation attempts. Unfortunately, we are increasingly losing that filter. Many people get their information almost exclusively from social media, which is notoriously prone to manipulation.

The same sources that spread the manipulation argue that the mainstream media is suppressing their message because it does not fit the narrative of an alleged establishment, further undermining trust in traditional journalism. It is a perfidious and extremely successful strategy: alternative facts creating a post-factual world.

In many cases, a more fact-based version, or at least a qualification or a different viewpoint, would be one internet search away. But how many people take this extra step to double-check a piece of information when the average attention span is not much longer than the time it takes to flick a finger?

Automatically assessing information trustworthiness might very well be the holy grail of information technology in the 21st century. Project Samarai certainly does not offer a complete solution, but we believe that graph languages and Havel can at least point a path forward.

Havel will allow any information to be digitally signed, be it a single expression or a whole book's worth of information: not just a simple signature by the author, but multiple, overlapping signatures by different signers, each with a different significance. For example, a journalist can sign a story as its author, a source inside the story can sign a single quote as approved, and the media outlet publishes the story with a signature from the editor. The article itself can contain (or reference) other stories that each come with their own set of signatures. Third parties can sign the story as recommended. This also works the other way: someone can sign a story as disapproved, in effect giving it a negative review. Signing a piece of information qualifies it, and the signature itself, to a certain degree, semantically merges with the information.
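Havel's syntax is not shown in this post, so here is a minimal Python sketch of the idea only: overlapping signatures, each with its own designation, attached either to a whole story or to a single part of it. All names are invented, and a plain hash stands in for what would be a real public-key signature in an actual system.

```python
import hashlib
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Signature:
    signer: str        # an entity in the infoverse: a person, an outlet, ...
    designation: str   # context of the signature: "author", "approved", ...
    target: str        # digest of the signed information (whole text or a part)

@dataclass
class Information:
    text: str
    signatures: list = field(default_factory=list)

    def digest(self, part=None):
        # Stand-in for cryptography: a real system would sign this
        # digest with the signer's private key.
        return hashlib.sha256((part or self.text).encode()).hexdigest()

    def sign(self, signer, designation, part=None):
        self.signatures.append(Signature(signer, designation, self.digest(part)))

quote = "We never approved that budget."
story = Information(f"City hall insider: '{quote}'")
story.sign("jane.doe", "author")                      # journalist signs as author
story.sign("insider.source", "approved", part=quote)  # source approves only the quote
story.sign("daily.example", "editor")                 # outlet signs at publication
story.sign("critic.org", "disapproved")               # third-party negative review
```

Note how the source's signature covers only the quote, while the other signatures cover the whole story; the signatures overlap without interfering with each other.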

Digital signatures are always tied to an entity that is part of the infoverse: a person, an institution, a company, etc. Additionally, every signature has a designation that specifies in what context it is valid. Digital signatures, together with their designations, therefore semantically extend information; they qualify it and give it semantic weight. If you trust the signer, you might have reason to trust the information.

Havel expressions can be enriched with a host of additional information: quotations, reference citations and reference pointers, to name just a few. Dynamic trustworthiness qualifiers are another method of enrichment; they directly qualify trustworthiness from a specific point of view (author, institutional, scientific consensus, etc.). Every annotation in Havel is a separate piece of information that can itself be qualified and annotated. For example, a quotation can carry an accuracy qualification.
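The key property here, that every annotation is itself information and can be annotated again, can be sketched as a recursive data model. The class and field names below are invented for illustration and do not reflect actual Havel constructs.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """Every piece of information, including annotations, is a node
    that can itself carry annotations (hypothetical data model)."""
    kind: str       # e.g. "statement", "quotation", "qualifier", ...
    content: str
    annotations: list = field(default_factory=list)

    def annotate(self, kind, content):
        child = Node(kind, content)
        self.annotations.append(child)
        return child  # the annotation is a node too, so it can be annotated in turn

story = Node("statement", "The mayor announced a new budget.")
quote = story.annotate("quotation", "'We will invest in schools.'")
quote.annotate("qualifier", "accuracy: verbatim from the press conference")
```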

Like any information, people and entities (content providers, institutions, publishers, media outlets, etc.) can also be qualified. Every user can have their own trustworthiness qualifications or use a predefined set provided by an institution they trust. These qualifications are not necessarily simple scores; they can be dynamic expressions that themselves take the information being qualified into account.

There can be multiple sets of trustworthiness qualifications for signers. If, for example, a user trusts a certain publisher in the context of scientific publications, she can give it a high score for that context. If she generally distrusts the same publisher's political coverage, the score for political publications might be much lower.
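One hypothetical way to model such context-dependent scores is a lookup keyed by signer and context, falling back to a neutral default where the user has recorded no opinion. All names and values here are invented:

```python
# Per-user trust profile: scores in [0, 1] keyed by (signer, context).
trust = {
    ("acme.press", "science"):  0.9,  # trusted for scientific publications
    ("acme.press", "politics"): 0.3,  # much less trusted for political coverage
}

def score(signer, context, default=0.5):
    """Return the user's trust score for a signer in a given context,
    falling back to a neutral default when no opinion is recorded."""
    return trust.get((signer, context), default)
```

The same signer thus receives different weights depending on the context of the publication being assessed.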

Using these annotations, enrichments and digital signatures, the Havel interpreter can try to assess the trustworthiness of a piece of information, not just in general but also from different, opinionated viewpoints. More than that, it can assess not only the trustworthiness but also the quality of the assessment itself, e.g. report how confident it is in a given assessment.
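As a toy illustration of what such an assessment might look like (not Havel's actual algorithm, which this post does not specify), one could compute a trust-weighted verdict together with a confidence value that grows with the amount of trusted evidence:

```python
def assess(signatures, trust, default=0.5):
    """Trust-weighted mean of signature verdicts, where a verdict is
    +1 for endorsing designations (author, approved, recommended)
    and -1 for disapproving ones. Purely illustrative."""
    if not signatures:
        return 0.0, 0.0                 # neutral verdict, zero confidence
    num = den = 0.0
    for signer, context, verdict in signatures:
        weight = trust.get((signer, context), default)
        num += weight * verdict
        den += weight
    score = num / den                   # weighted verdict in [-1, 1]
    confidence = 1 - 1 / (1 + den)      # more trusted evidence -> higher confidence
    return score, confidence

trust = {("jane.doe", "journalism"): 0.9, ("critic.org", "reviews"): 0.1}
sigs = [("jane.doe", "journalism", +1), ("critic.org", "reviews", -1)]
verdict, confidence = assess(sigs, trust)
```

Here the endorsement by a highly trusted journalist outweighs the disapproval by a barely trusted critic, and the confidence value separately reflects how much trusted evidence the verdict rests on.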

This approach might come with a whole set of new, perhaps still hidden dangers. It might be prone to manipulation (wherever such a mechanism exists, we must assume there will be attempts to manipulate it), and it requires well-qualified information to work properly in the first place. If it were used regularly by the public, it might create new challenges for digital encryption key management. How accurate the assessments will be in real life, where information quality is not always optimal, also remains to be seen.
