Philip Sheldrake
1 min read · Sep 6, 2022


I'd love to jump on a call to discuss this space, Paul. Perhaps via the Neighbourhoods project?

I cannot share your enthusiasm if only because I think this is even harder than it appears, and it appears hard enough!

Programmatic quantification of reputation (including trustworthiness) is an inevitable evil because of the unavoidable self-moderation and modulation it inflicts on its subjects, beyond anything that might be argued as 'good for society'. To be clear, the social accretion of local, contextually relevant reputation, with forgiving opportunities for reparation, has served communities for millennia. Here, however, we are considering universal, non-contextual, and irremediable scoring and algorithmic assessment; after all, what mechanisms could we develop to prevent as much?

Moreover, the conceptualization of identity on which your system rests is pregnant with dystopian potential. Specifically: "Every member of a community has a continuous digital identity, associated with a unique username and profile."

For more on the latter: https://sheldrake.medium.com/digital-identity-is-not-human-identity-and-that-matters-b2330fea9630
