Sharing Data with Welcomer

Entities in Internet systems connect with data elements. In a simple case, a system connects A with B using a data element X. Both A and B store the data they transfer. With Welcomer a copy of X is stored with either A or B or with Welcomer. Let us assume A connects to C with the same data element X. Let us assume B connects to D with the same data element X. This creates two further copies of X. Welcomer links the copies of the data element X as in the diagram.

If we now connect C and D with X then this is done by linking a new copy of X between C and D.

Either party to a connection can specify with whom they are willing to share X and which applications are allowed to access X.

When a connector X is made, each party independently decides the meta-data for X within its own system. There is no need to agree on names or meanings.

Each party alone determines what applications can access X.

A, B, C and D are autonomous, and changes in X in one system may or may not change the shared connector value.
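The linking described above can be sketched in code. This is a rough model only; the class and method names are invented for illustration and are not the Welcomer API.

```python
# Minimal sketch of Welcomer-style data sharing (hypothetical names,
# not the actual Welcomer API). Each connection stores its own copy
# of the data element X; Welcomer records links between the copies.

class DataCopy:
    def __init__(self, holder_a, holder_b, value):
        self.holders = (holder_a, holder_b)  # the two connected entities
        self.value = value                   # this connection's copy of X

class Welcomer:
    def __init__(self):
        self.copies = []
        self.links = []   # pairs of copies known to refer to the same X

    def connect(self, a, b, value):
        copy = DataCopy(a, b, value)
        # link the new copy to every existing copy of the same
        # element (only X exists in this sketch)
        for existing in self.copies:
            self.links.append((existing, copy))
        self.copies.append(copy)
        return copy

w = Welcomer()
w.connect("A", "B", "X")   # A-B connection stores a copy of X
w.connect("A", "C", "X")   # A-C stores another copy, linked to A-B's
w.connect("C", "D", "X")   # C-D adds a third copy, linked to both

print(len(w.copies))  # 3 independent copies
print(len(w.links))   # 3 links joining them
```

Each copy stays with its own connection, so any party can change its copy without forcing a change on the others, which is the autonomy the text describes.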

Low Cost Envestment Loans with Welcomer

Michael Hudson in his book Killing the Host: How Financial Parasites and Debt Bondage Destroy the Global Economy gives good evidence that the financial system is killing the real economy. Envestment Loans replace bank debt and remove the interest charges taken by financial parasites. We do not need to create money to transfer value over a period of time. That is what Envestment Loans do, and in so doing they change the study of economics. Envestment Loans, or variations on them, will prevail because they reduce the cost of transferring value by 98%.

From the Business Dictionary

A loan is a written or oral agreement for a temporary transfer of a property (usually cash) from its owner (the lender) to a borrower who promises to return it according to the terms of the agreement, usually with interest for its use. If the loan is repayable on the demand of the lender, it is called a demand loan.

Envestment Loans are:

An electronic agreement for a temporary transfer of value from a lender to a borrower who promises to return value in the form of goods and services. The borrower may return extra value according to the terms of the agreement.

Envestment Loans are independent of each other. This means a change to any loan has no built-in dependencies relating it to any other loan.

Cash Loans have strong dependencies across loans. Inflation of money value impacts all loans. Interest on one loan, particularly on government loans, impacts the interest on all other loans. Interest paid on Cash Loans increases the amount of money exchanged. This in turn impacts other loans. Banks create cash (money) when they lend it. They destroy it when it is repaid. The creation and destruction of cash impacts all other cash loans. Promise Theory tells us that the cost of coordinating agents (here, loans) increases exponentially when the agents have strong connections.

Cash Loans are costly to operate. Envestment Loans cost 2% of what Cash Loans cost. The cost of a Cash Loan is the interest paid to the bank. This cost appears as higher borrowing costs and lower savings returns for the same amount of money lent.

Envestment Loans use the same Welcomer technology that Welcomer Identities use.  The sum of the loans of which an individual is part becomes their net money worth.  That is, each individual has a distributed ledger that combines all their loans.

To see examples of how Envestment Loans can work visit


Low Cost Identities with Welcomer

Most systems identify people via identity providers. Another entity gives a person an identity. That identity is then used by others. For example, the government provides us with many identities and we use them with others. This approach applies to any identified thing. So we identify houses, dogs, companies, etc. We give them an identity that others use.

With Welcomer, control of the identity rests with the identified entity. It happens through mutual identification. Two entities connect and, in connecting, they identify each other. Neither knows what identity token the other gave them. One of these persons now connects to another person. They identify themselves with mutual identification. Welcomer connects the two identities and remembers the connection. A person can now say their identity is the set of links between their individual identities and other entities. The details of their identity remain with them and with the other entities. A person or a thing can have many different graphs where there are no common links between things.

This approach is low cost because each entity's identity is independent. Here is an explanation.

Identity systems using identity providers are costly. In Australia we have spent hundreds of millions of dollars on a Health ID system and it still does not work well. We have spent hundreds of millions on passports, licences, MyGov and single sign-ons at all levels of government, trying to provide people with identities. The costs are escalating with the DTO and its equivalents in the States. There is a better way.

Using the Welcomer approach the cost of an identity is low.  It is less than 1% of the cost of using an identity provider.  The cost of an application or service using an identity can include the identity cost.

The Internet of Things can use the same identity system.  This gives a common way for all things including people to interlink and identify themselves.

This increases the value because it allows the easy integration of things with people.  A car has an identity and its identity, with all its history, can be passed on to a new owner with the same identity system.

Rent and Buy

Do you want to live in a country where housing is affordable?

Do you want a safe place to invest your savings?

Do you want to make it difficult for Real Estate speculators?

Do you want to see fewer people without homes?

Do you want to pay less to buy a home?

If you want any of these then please support Rent and Buy.

Rent and Buy is a new type of loan.  If you can afford to rent a home you can afford to buy it. 

Most people buy a home by borrowing money from a bank. Savers put money into banks who then lend it to home buyers.  Borrowers use the money to buy the house from an owner and repay the bank loan. To make sure they repay the loan the bank has a mortgage on the house.

With Rent and Buy a person borrows money from Savers, not a bank. Banks, governments and other organisations can help Renters borrow from Savers. Or a Renter can borrow directly from a Saver. Instead of the bank holding a mortgage on the home, the Saver holds the mortgage.

Rent and Buy removes the cost of inflation and the cost of interest on interest. Rent and Buy financing and operating costs are about half the cost of bank loans. 

Renters and Savers share the savings.  

If you support Rent and Buy we will ask banks, governments and others to offer Rent and Buy loans.

To support Rent and Buy visit

How it works

An example: If you have a $300,000 loan over 20 years at 4.5% Rent and Buy will save you $24,000.  If the interest rate is 6% you will save $102,000.

Another example: For Savers a $300,000 Rent and Buy loan held for 20 years will return an extra $62,000 over bank interest.  This assumes an inflation rate of 2% and a bank interest rate of 6%.  With regular loans this money often goes to finance speculators with accounts in overseas tax havens.

Renters must live in the home they are buying and must have title to the home. Savers are people who have savings they want to invest. They purchase mortgages tied to a given home.  The Renter builds equity in the home with a deposit and with their regular repayments. If the home is sold then both the mortgage holders and the Renter share in any Capital Gain or Loss. Typically a loan has many Savers. Often there are many Renters like a couple with the title in their joint names.

Each Loan is tied to a particular property. A holder of a mortgage can sell their share of the loan quickly and easily to another Saver at any time. Most Savers will receive their money back as an annuity.

The contracts for borrowing and lending are almost the same as existing contracts.  The main difference is that a mortgage may have many owners all with equal rights. For tax and accounting purposes Rent and Buy loans look like secured loans.  Savers leave the interest with Rent and Buy and the interest is lent to new Renters.

If you are a Renter you pay 5% or more of the total value of the home each year. At 5% it will take you a maximum of 34 years to buy your home without a deposit. The 5% can be lower if you have significant equity in the home. You can pay off the loan and accrued rent at any time without penalty. The home is revalued every 12 months. If you cannot pay your rent you vacate the home so someone else can Rent and Buy it. The new person living in the home gets the title to the property. You then become a Saver and own part of the Mortgage on your former home. This means if you have to sell you do not lose the equity you have built up in the property. You can use your equity as a deposit on your new home or you can sell your part of the Mortgage to another Saver.

Rent and Buy does not own any property.  All unlent Savings are held in Trust Accounts. Financial Institutions sell Rent and Buy services and handle the day to day customer questions and queries.  Rent and Buy software matches Renters with Savers and handles the Mortgages and Payments.  Rent and Buy charges a fee each time rent is paid or Savings are deposited. Financial institutions charge fees for selling the service, arranging the legal documents and for handling customer questions and queries.  

A person can be both a Renter and a Saver.  A person can borrow money to buy a house while allowing others to use their savings or super to buy a different house.

Rent and Buy provides low cost loans.  This happens because we have invented a way to reduce the risk of funding. We have also invented a way to provide a secure private system that renters, savers and operators can understand and monitor. For the technical minded Rent and Buy is a distributed ledger of all the transactions involving the home. Each Renter and each Saver has 24 hour access to their accounts.

Here are some typical situations where you might use Rent and Buy

  • If you wish to own your own home then Rent and Buy is low cost and low deposit.  The rental rate is 5% annually of the value of the Rent and Buy home. 
  • If you own a dwelling and want to sell it and get an inflation-adjusted 7% fixed income on the value of your property then Rent and Buy is an option.
  • If you have an existing loan switching to Rent and Buy will almost always reduce your total payments.
  • If you own a home and want to live off money you have saved buying the home then Reverse Rent and Buy is an option.
  • If you have a self managed super fund and would like to invest in property then Rent and Buy Mortgages is an option.
  • If you have savings and want to get an inflation adjusted 7% fixed income on the savings then Rent and Buy Mortgages are an option.

If you are a Renter you can calculate your yearly rent as 5% of the value of your home.

If you are a Saver you can calculate yearly earnings as 7% of the inflation adjusted value of the amount of money invested.

The length of time to pay back a loan is calculated as:

V= Value of your home.

R = Repayments = V times 5%

L= Value of your loan.

X = L divided by R

Years to repay the loan =  X + (X * L * 7%) / R / 2

A $300,000 loan on a property of value $1,000,000 gives

V = 1,000,000

R = 50,000

L = 300,000

X = 300,000 / 50,000 = 6

Years = 6 + (6 * 300,000 * 7%) / 50,000 / 2 = 6 + 1.26 = 7.26
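The worked example above can be checked directly. The variable names follow the definitions in the text:

```python
# Years to repay a Rent and Buy loan, following the formula in the text.
V = 1_000_000          # V = value of the home
L = 300_000            # L = value of the loan
rent_rate = 0.05       # yearly rent is 5% of the home's value
return_rate = 0.07     # Savers earn an inflation-adjusted 7%

R = V * rent_rate      # R = yearly repayments
X = L / R              # years to repay the capital alone

years = X + (X * L * return_rate) / R / 2
print(round(years, 2))   # 7.26, as in the worked example

# With no deposit (L equal to V) the same formula gives the
# 34-year maximum quoted in the Rent and Buy description.
X_full = V / R
years_full = X_full + (X_full * V * return_rate) / R / 2
print(round(years_full, 2))   # 34.0
```

The second calculation shows the formula is consistent with the earlier claim that, at 5% rent, a home is bought in at most 34 years with no deposit.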

This blog post looks a little more deeply into the theory behind this new approach to Renting and Saving.



Reducing the Cost of the Financial System

The Financial System regulates the distribution of Capital. It does this through Money Markets. Economists justify using Money Markets as a regulator by assuming the Efficient Market Hypothesis holds.

This hypothesis says that an unfettered market is an efficient way to distribute resources. Unfortunately, history shows us that large Money Markets are unstable and high cost. The Efficient Market Hypothesis does not hold for large Money Markets.

There is a cost to all regulation. In Financial Systems it takes the form of lost value or direct costs. Some regulatory costs in Financial Systems are:

  • Interest on Interest
  • Interest on Money in bank accounts
  • Inflation
  • Exchange Rate costs
  • Insurance overhead costs
  • Costs to transfer value
  • Overhead of interest and rental costs
  • Overhead of taxation for the redistribution of value
  • Legal and Accounting costs
  • Enforcement of Regulations
  • Overhead costs of markets
  • Proportion of the cost of Politics

In a modern economy these costs are high. They are greater than 50% of the cost of producing goods and services. The costs are high because Money Markets are complex and unstable. We can reduce costs if we reduce complexity and increase stability by introducing methods other than Money Markets to distribute Capital. It is likely that these methods will rapidly replace most Money Markets because the infrastructure to support them is low cost. Most of the existing technology that supports Money Markets is reusable with the new approaches.

Other Models - Control Theory and Cybernetics

Cybernetics is the study of regulatory systems.  Regulatory systems have closed signal loops. The systems cause a change in the environment. The change in the environment feeds back into the system. This in turn triggers another change in the system. This is a feedback loop.  

Negative feedback stabilises systems. Positive feedback causes instability because a small change grows larger with each pass through the loop. Maxwell published the first paper on Control Theory, "On Governors", in 1868. It quantifies what happens with both forms of feedback.

By removing positive feedback loops in Financial Systems we can increase stability. Two positive feedback loops in Money Markets are interest on interest and inflation.

Other Models - Promise Theory

Promise Theory tells us how to reduce complexity. Promise Theory says that reducing complexity reduces costs.

Complexity arises from the interactions of separate autonomous agents.  Complexity increases exponentially with the number of agents affected by any one change.  Promise Theory addresses this and shows how to build large systems with low complexity. It says that complexity occurs if changes in one agent cause a change in another agent.

Financial Systems are complex.  A change in the value of money held by one agent influences the value of money held by other agents.  The interest rate charged by a Central Bank changes the value of money held by all agents.  Inflation of value in one asset class causes inflation of all money.

The easiest way to reduce costs is to reduce the regulatory costs. The easiest way to do this is to reduce the cost of complexity. The easiest way to do this is to reduce complexity by isolating the effect of changes.

Applying Control Theory and Promise Theory to Loans

Two regulatory mechanisms in a Money Market are inflation and interest on interest.  Inflation facilitates the redistribution of money. Interest on interest compensates for inflation. These are real costs and removing them will reduce costs.  Both inflation and interest on interest are positive feedback mechanisms.  Both lead to instability in Financial Systems. Removing them increases stability.

We can remove costs and increase stability by changing the rules for the repayment of loans.  We use loans to invest in the future production of goods and services. Instead of paying interest on invested money we give a discount on the cost of future production.  This removes two costs.  It removes the cost of interest on interest.  It removes the cost of producing money to pay interest.  We compensate investors for inflation by adjusting the amount owing by the inflation rate. 

It increases stability because it removes the positive feedback of interest on interest and it removes the positive feedback of inflation.

This approach to loans makes each loan independent of any other loan.  Loans are only repaid with goods and services. The loans themselves are transferrable. Promise Theory predicts the isolation of Loans from each other reduces complexity. This reduces costs because many of the administrative processes we put in place to compensate for instability are no longer needed.

Investors need a return on money invested otherwise they will not invest.  The return should make the investment worthwhile and cover investment risk. The approach to loans suggested here does this.  It saves the borrower interest on interest and saves the lender the cost of inflation. 


Evolving a Stable Financial System

A Stable Financial System

A stable financial system is one where money retains its value. Inflation or shocks in one area of the financial system have little or no effect on other areas. It is one where credit created for a project remains unaffected by financial events external to the project.

A short video summarising the benefits.

Promise Theory and Credit 

Promise Theory is a model of voluntary cooperation between individual, autonomous actors or agents. Agents publish their intentions to one another in the form of promises.

A description of Promise Theory from its author.

A promise is a declaration of intent.  A promise is not an obligation and is a weak connection between actors.

Credit is a promise to repay the receipt of value with something else of value.  Credit becomes an obligation when expressed in the form of a given amount of a currency.  An obligation is a strong connection between actors.

Promise Theory makes hypotheses about the complexity and cost of connections between autonomous agents. It says weak connections cost less than strong connections when systems have many autonomous agents. It says systems with strong connections are brittle and subject to catastrophic failure.

We can apply Promise Theory to credit. If it holds, then credit based on currencies costs more than credit based on separate peer-to-peer credits.

Credit in the form of money debt has strong connections between each credit.  When interest rates change this changes all issued credit. Central Banks use these strong connections to control the money supply.  This makes the financial system of debt vulnerable and controlling the system becomes expensive.  An estimate of the cost is the cost of risk.

With traditional debt, inflation is a cost to lenders. The total cost of risk is the cost of inflation plus the cost of interest on interest. Inflation benefits borrowers. Interest on interest benefits lenders. With double-entry book-keeping they cancel each other out, but they are real costs. For the economy as a whole they are additive. An estimate of the cost of risk is inflation times two. For a 1.62 trillion dollar economy with inflation at 2.5% this is about $80 billion per year.
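The estimate above is simple arithmetic, which can be stated explicitly:

```python
# Cost-of-risk estimate from the paragraph above: inflation counted
# twice, once as the lenders' inflation cost and once as the
# borrowers' interest-on-interest cost.
gdp = 1.62e12        # a 1.62 trillion dollar economy
inflation = 0.025    # 2.5% inflation

cost_of_risk = 2 * inflation * gdp
print(round(cost_of_risk / 1e9))   # 81 (billion dollars per year)
```

The exact figure is $81 billion, which the text rounds to $80 billion per year.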

Removing Monetary Inflation

We remove inflation by removing interest on interest on loans. We do this by making each loan independent. We do this by changing the way we repay loans. Instead of paying back interest first, we pay back capital first. Instead of the amount owing reducing in real terms because of inflation, we increase it with inflation. We use the same interest rate for as long as we owe money on a loan.

Changing the way we calculate loan repayments will stabilise financial costs. Because the costs are stable, the cost of finance drops. We use the savings that come from stability to do more.
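The text does not specify the exact repayment mechanics, so the following is only a guessed sketch of the rules described above: the capital owing is indexed up by inflation, interest is simple (it never compounds, because it is never added to capital), and repayments clear capital before accrued interest.

```python
# Illustrative sketch only - not a specified Envestment Loan formula.
# Capital is indexed by inflation; simple interest accrues on capital
# but never compounds; repayments go to capital first, then interest.

def years_to_repay(capital, payment, rate=0.07, inflation=0.025):
    # Assumes the yearly payment exceeds inflation indexing plus
    # interest, so the loop terminates.
    accrued = 0.0
    years = 0
    while capital > 0 or accrued > 0:
        capital *= 1 + inflation       # index the amount owing
        accrued += capital * rate      # simple, non-compounding interest
        paid_to_capital = min(payment, capital)
        capital -= paid_to_capital     # capital is repaid first
        accrued -= min(payment - paid_to_capital, accrued)
        years += 1
    return years

print(years_to_repay(100_000, 20_000))  # 7 years under these assumptions
```

Because the accrued interest never earns interest itself, there is no interest-on-interest feedback loop, and because the capital is indexed, the lender is compensated for inflation without inflation acting on every other loan.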

Government debt without compound interest

Government can eliminate money inflation on its loans by issuing transferable rights to pay future taxes at a discount. This money must be invested to create new value. Here is a short video to explain the concept.

Each credit created using this approach is independent. Inflation, or deflation, does not affect any individual credit repayment because the amount owing is adjusted. Governments or other entities introduce new credit into the system to cover losses from failed, unpaid credit. This controls the money supply.

The system is easy and cheap to introduce and it will scale.

It leaves all existing systems in place.  It gradually replaces existing loans where it makes sense to do so. All current institutions and loan arrangements continue. They are only changed when it makes economic and political sense to do so.

Promise Theory hypothesises the system will evolve towards stability loan by loan.

Evolving a Decentralised Identity System

A distributed identity system connects existing electronic identities. It evolves from the connections made with applications. Initially there is no decentralised identity. The identity system emerges from the use of applications that use the connections.

An electronic identity gives an individual access to information in a data silo.  A distributed identity system allows an individual to access the same data across silos.  Identification with one data silo gains access to the same data across all silos.

The diagram illustrates how a distributed identity evolves with Identic. An individual has identities with three different organizations. They sign on to each organization with the same application. When they sign on, the application makes a copy of the identity token used with each organization. The identity tokens are linked and the links are saved with the organizational copies of the data elements.

At some later stage the person comes back to organization C with an application Y. They log on to organization C.

Application Y accesses the data element address in organization C.  Organizations A and B have agreed to allow application Y to access the address data. 

When the person logs on to organization C they have access to the address data held in A, B and C. They can do this because the copies of the identity tokens are linked and stored permanently.
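The linking step can be sketched as follows. The data structures and names here are invented for illustration and are not the actual Identic implementation:

```python
# Sketch of linking identity tokens across organizations
# (hypothetical structures, not the actual Identic implementation).

links = []    # permanent store of linked token copies
tokens = {}   # organization -> this person's identity token there

def sign_on(org, token):
    # copy the token and link it to every token already seen
    for other_org, other_token in tokens.items():
        links.append(((other_org, other_token), (org, token)))
    tokens[org] = token

sign_on("A", "token-a")
sign_on("B", "token-b")
sign_on("C", "token-c")

# Later, logging on to organization C reaches the linked identities
# at A and B, and through them the address data those silos hold.
reachable = {org for pair in links for (org, _) in pair}
print(sorted(reachable))  # ['A', 'B', 'C']
```

Note the tokens themselves never leave the person's side of each connection; only the links between the copies are stored, which is why access still depends on each silo granting permission to the application.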

The same data can now be accessed across data silos. The application that accesses the data must have permission from the data silo.  Permissions are granted by each data silo independently of any other data silo.

Other applications can use other data elements with other data silos.  Each silo makes its own rules on access.  Access rules are enforceable because the underlying linking software is open source. Only certified copies of the linking software and applications are permitted access.

An individual can also put restrictions on individual application access.

Access is granted through applications, not data.  Applications define the context for access. This permits the same data to be shared or not shared depending on the application.

Data remains in silos.  There is no impact on existing systems because copies of the data are linked.

Centralised Compared to Decentralised Identity Systems

Centralized Identity Systems

Identity systems provide organizations with tools to manage relationships with individuals.  The identity tools for a single organization are well known and work well. These identity systems are centralized around organizations. A way to provide identity management across organizations is to extend the scope of an identity. This means taking an existing identity service and making it usable across organizations.

This approach takes many forms. Some names are federated identity, single sign-on, identity trust frameworks, user managed access, personal clouds, and OpenID.

The approach has proven to be problematic, difficult and expensive to deploy. 

Why it is hard to integrate identity

A single identity across organizations means organizations have to cooperate. Organizations have difficulty cooperating on identity because relationships are central to organizations.  Organizations exist to allow individuals to cooperate to achieve tasks. An organization becomes the sum of its relationships with individuals. By using the same identity the relationships across organizations become one.  This means combining identities leads to combining organizations. Combining organizations is difficult.  It takes effort and the end result is normally one organization absorbs the other.  Until this happens both organizations become less effective and costs increase. After it happens economies of scale become less pronounced.

Economies of scale are the justification for combining organizations, because combining organizations combines functions. Providing common functions across organizations is the justification for centralizing services.

Economies of scale do exist, and they exist around specific functions. Generally the production of any good or service shows the experience curve effect. Each time cumulative volume doubles, costs fall by a constant percentage. The experience curve effect is a learning effect over time. It does not come from combining existing services.

Combining two organizations does not combine the two learning effects for a given service. Rather it is the opposite. The two organizations have to learn to work together. The cost of combining the output through integration is considerable. Once completed the rate of productivity improvement through learning drops. This happens because the starting volume is greater.

Centralizing services has the same effect. Instead of reducing costs it increases costs.  This happens because the centralized service creates a new service.  This new service has to integrate with or replace existing services. It now costs extra to do the same task. The rate of future improvement through learning drops because of the greater starting volume.

An alternative is to keep existing decentralized services and encourage the experience curve effect. This happens by giving existing services a way of incorporating learning from other services.

Decentralized Identity Services

There is an alternative to a centralized identity system. This is to keep existing decentralized identity systems intact and unchanged and reuse the best parts of existing services.  An identity service consists of many micro services. We can make the micro services in one organization available to other organizations. We do this by allowing an individual to access a micro service no matter where it resides.

Improvements and learning happen as the volume of service use increases. Superior micro services will gain volume and increase their superiority.

For example one organization introduces voice verification. They make it available to individuals as a second factor for a given application through a micro service. The same individual can now use voice verification with all participating organizations. It does not matter what organization requires the application nor what organization hosts the service. From the point of view of the individual it appears the same. The best voice verification service of participating organizations will get increased volume and the learning curve effect improves the service.

The most common example of a shared micro service is data capture. Organizations capture data about a person. Capturing data is a micro service. It is now available to any approved application.  It becomes available to any other organization through the same application.  This applies to all data, to all applications and to all organizations.  This removes the need for an individual to provide data many times. This eliminates costs and makes systems consistent and easy to use.
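The sharing pattern in the two examples above can be sketched as a small service registry. The names and API here are invented for illustration; the text does not specify an interface:

```python
# Sketch of sharing a micro service across organizations
# (hypothetical names; not a specified interface).

services = {}   # service name -> (hosting organization, callable)

def register(org, name, fn):
    services[name] = (org, fn)

def use(name, *args):
    # the individual gets the same behaviour no matter which
    # organization hosts the service
    org, fn = services[name]
    return fn(*args)

# One organization introduces voice verification as a micro service...
register("OrgA", "voice-verify", lambda sample: sample == "enrolled-voice")

# ...and an individual signing on at any participating organization
# can now use it as a second factor.
print(use("voice-verify", "enrolled-voice"))  # True
```

The point of the sketch is that the service accrues all the usage volume, wherever the individual encounters it, which is what lets the experience curve effect operate on the best implementation.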


Existing identity systems consist of many identity micro services. Making them available to an individual allows consistent services for that individual. This reduces costs to the individual. It reduces the cost to organizations because they can use other organizations' micro services. It increases the value of the best micro services because the learning effect lowers their cost.

Making micro services available requires no change to existing decentralised identity systems. It achieves the goals of centralizing services without the costs of centralization.

A description of Identic

Written descriptions and stories are sequential. Complex systems are parallel systems where many threads happen at the same time. Non-cognitive thought is parallel but we describe in sequence. This leads to difficulties. Our information systems work in parallel. Different parts interact with other parts. It is this parallelism with interactions that makes information systems indeterminate.

We design and build information systems as though they were sequential and deterministic. We believe that our programs are reliable and that we can model the real world with sequences of instructions. We can't. Our models (computer programs) no longer reflect reality.

What happens with our models (computer programs) is that they become the real world. Our electronic identities become real. The direction of most of the effort on identity has been to keep making identities "bigger". We do this with single sign-ons, with OpenID and with personal clouds. Making things bigger by using the same structure and connections - just more of them - does not scale.

The systems created with tools like CloudOS, formalised in Promise Theory, scale in a different way. Instead of doing the same thing but bigger, we find the atomic elements and combine the atoms in different ways.

A physical analogy from Promise Theory is the combination of Hydrogen and Oxygen atoms.

One of the stable ways Hydrogen and Oxygen can combine is H2O, the water molecule. There are other ways Hydrogen and Oxygen can combine. For the purposes of this explanation we now think of H2O as a stable molecule. These molecules combine together. Scaling does not happen by making bigger water molecules, say H1000O500. Rather, H2O becomes the new "atom" and H2O molecules combine together. Depending on the conditions we get water, ice, or water vapour.

In other words this is a scalable way of combining Hydrogen and Oxygen atoms.

Promise Theory describes how this is nature's general approach to handling complexity.  I think it may be a new theory of evolution; but let us not go down that burrow:)

How can we apply this approach to identity?

Let us define an identity atom as the mutual identity between two autonomous entities. An organization identifies a person and the person identifies the organization. This is a username/password plus all the identity information passed between them. This becomes the identity atom. All existing electronic identities are the new identity atoms.

Identic is a way of linking these identity atoms with promises. We expect some promises to be broken. The set of identity links becomes another kind of identity object. The Identic links create something analogous to water. It is a different form of identity. The Identic links create stable polyhedra. We can visualise this identity as a set of polyhedra.

We are not using CloudOS to build Identic but are using Scala as it is a declarative language.  Declarative languages enable us to specify promises. This contrasts with imperative languages where we construct sequential deterministic systems.

But, we must govern the organizations that build and maintain the polyhedra. To that end Identic is Open Source Software that helps ensure cooperative behaviour in entities that use the software.

Identic Reliability

Identic identities are reliable because of continuous checking. Let us assume we have a person with identities at 6 organizations. There are five different data elements used for identification. The table shows the data elements held by each of the organizations.

  1. A, B, C

  2. B, C, D

  3. A, B, C, E

  4. D, E

  5. A, B, C, D, E

  6. B, D, E

These are linked as follows

This structure has redundancy.  The values of A, B, C, D, E should remain the same.  The links should remain in place.  If any of the values are different they can be replaced with the most common value. If any of the links are missing they can be replaced.  The ordering of the links can be changed to make processing more efficient.  If any identity 1, 2, 3, 4, 5, 6 is removed the structure can be readjusted.  Any new node can be inserted without affecting the existing structures except the immediate links.  If all the data is missing it can be reconstructed from the organization’s copies of the data.
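The repair rule described above (replace a value that differs from the others with the most common value across the linked identities) can be sketched as follows.  The organisation numbers match the table above, but the element values are purely illustrative.

```python
from collections import Counter

# Illustrative copies of the data elements held by the six organisations.
# Organisation 3 holds a deviating value for element "A".
org_records = {
    1: {"A": "Kim", "B": "1980-01-01", "C": "Canberra"},
    2: {"B": "1980-01-01", "C": "Canberra", "D": "ACT"},
    3: {"A": "Kimm", "B": "1980-01-01", "C": "Canberra", "E": "AU"},
    4: {"D": "ACT", "E": "AU"},
    5: {"A": "Kim", "B": "1980-01-01", "C": "Canberra", "D": "ACT", "E": "AU"},
    6: {"B": "1980-01-01", "D": "ACT", "E": "AU"},
}

def find_anomalies(records):
    """Report, per data element, any value differing from the most common one."""
    by_element = {}
    for org, elements in records.items():
        for name, value in elements.items():
            by_element.setdefault(name, []).append((org, value))
    anomalies = {}
    for name, pairs in by_element.items():
        counts = Counter(value for _, value in pairs)
        consensus, _ = counts.most_common(1)[0]
        deviating = [org for org, value in pairs if value != consensus]
        if deviating:
            anomalies[name] = (consensus, deviating)
    return anomalies

print(find_anomalies(org_records))   # {'A': ('Kim', [3])}
```

In this sketch organisation 3's copy of element A would be flagged and could be replaced by the consensus value held by organisations 1 and 5.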

Identic continually monitors these structures and reports situations where there are variations and possible anomalies.  Most of these can be handled by a computer.  Where a computer cannot handle a case it asks the individual concerned to make a decision.

Each identity with each organization remains intact.  The loose connections between identities are structured as a set of polygons.  The polygons become a polyhedron.  The properties of the polyhedron assist in maintaining the new object made from the loose connections.  The loose connections are defined as a set of promises as in Promise Theory.

This approach makes identity scalable while maintaining and enhancing privacy.

ProTip and Payments for online content

Chris Ellis has a great idea for content payment using bitcoin. It is a two sided market connected with Open Source Software and bitcoin.

Unfortunately I will not use ProTip in its present form because it uses bitcoins.  For various reasons I do not want a bitcoin account; but that is another story.

Important ideas in ProTip are:

  1. People and advertisers pay for the content - to the content "thing".  The content distributes the funds received to the parties involved in content delivery. 
  2. Content is in the form of files.
  3. ProTip open source software provides an algorithm to distribute funds collected. Using bitcoin means who pays and who gets paid are anonymous.
  4. The author and all those involved in producing the content get paid. 
  5. The content can pay other parties.  Taxes, hosting of content, open source software, and holders of funds can all get paid.
  6. Amount paid, amongst other things, depends on the number of copies made of the content. 

ProTip uses bitcoins for payment and for knowing how to distribute money collected.  At present ProTip does not collect from advertisers; but it could.

The ProTip distribution algorithm is like the ant optimisation algorithm used by Google. 

Instead of using bitcoin ProTip could use accounts identified with Identic.  The funds move between trusted accounts.  These are identified with a blockchain. This fits into the regular financial system. All funds transfers are between anonymous accounts in the existing payments systems.  

This video of a Chris Ellis interview explains more.  For those most interested in content payment, start viewing from 24 minutes and 30 seconds.  The first part describes fullnode.  Fullnode deploys bitcoin, but if we do not need bitcoin we do not need fullnode.

Scaling Identity

Centralisation is the common approach to the provision of electronic identity.  We have single signon, OpenID, Personal Clouds, Google ID, Facebook, Apple ID and ID Cards.  All these use the idea that a person has a single identity.  We attempt to scale this by using the same ID with different entities.  However, our relationship with each entity with whom we interact is different and these differences have to be taken into account when we transact using a common identity.

In summary each application and each organisation has a unique identity.  The centralisation approach combines these identities so we have a single unique identity.  This does not scale.  

A solution to scalability is to use the ideas from promise theory and leave identity distributed.

Identic uses these ideas to give individuals a scalable identity.  Each individual makes promises about their identity to each other entity and application with each connection.  Identic gives the individual a way to weakly link the connections and so strengthen promises. The more identity connections a person makes the stronger each individual identity becomes.

Promise theory uses an analogy of combining molecules to explain what is happening.  We can think of a connection between two entities as being a single molecule.  Each molecule exists on its own.  Molecules can connect to other molecules.  The total combination of all the connected molecules becomes a new thing.  For H2O molecules different connections create ice, water, steam, or water vapour. 

When we connect elemental identities together we get different things emerging from the connections.  We might get a social identity, a company identity, a government identity or an anonymous identity.  Each of these is a different thing.

With electronic identities we connect elemental identities with applications using the "same" data.  The connection might be that the date of birth is linked in each of the elemental identities.  This would be useful for a proof of age system.  For a voting system we can make another set where electorate of residence is the common connected element.

The connections to create an emergent identity require both the data elements and the reason for connecting. This means Identic enables many emergent identities from the same data elements and the same connections.  So, we have a scalable identity system because we create new identities for any purpose.  Each emergent identity does not impact on other identities. Our identity for our age is unrelated to our electorate identity.  An application can connect these two identities to become a voting identity.  This new application has no impact on existing uses of our age or electorate identities.
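A minimal sketch of how an emergent identity could be assembled from the data elements shared across connections.  The connection names, element names, and values are hypothetical; the point is that each purpose selects its own subset of agreed elements.

```python
# Hypothetical elemental identities: per-connection records that each
# happen to carry some of the same data elements.
elemental = {
    "bank":    {"date_of_birth": "1980-01-01", "electorate": "Fenner"},
    "club":    {"date_of_birth": "1980-01-01"},
    "council": {"electorate": "Fenner", "date_of_birth": "1980-01-01"},
}

def emergent_identity(purpose, required_elements):
    """Build a purpose-specific identity from elements the connections agree on."""
    identity = {"purpose": purpose, "elements": {}}
    for element in required_elements:
        values = {rec[element] for rec in elemental.values() if element in rec}
        if len(values) == 1:            # all connections agree on the value
            identity["elements"][element] = values.pop()
    return identity

proof_of_age = emergent_identity("proof of age", ["date_of_birth"])
voting = emergent_identity("voting", ["date_of_birth", "electorate"])
print(voting["elements"])   # {'date_of_birth': '1980-01-01', 'electorate': 'Fenner'}
```

The proof-of-age identity and the voting identity draw on the same underlying connections yet remain independent objects, which is the scalability property described above.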


Single SignOn Compared to Welcomer

The world of personal electronic identity is dominated by the idea of an Identity. A person's electronic identity is a single thing. Within a single organisation it is expressed by the idea of Single Signon.  Across multiple organisations it expresses itself as OpenId.

Welcomer has a different world view.  In the Welcomer world a person's electronic identity is the sum of their electronic relationships.  In each relationship a person has an ID. But, the ID is not an Identity. 

A recent discussion with a government official highlighted the difficulties of explaining Welcomer. The following summarises the conversation.  SSO here stands for the traditional idea of identity and encompasses both Single SignOn and OpenID. 


With SSO personal information moving between organisations needs an explicit agreement between organisations.  

With Welcomer an individual has separate agreements with each organisation. The individual moves their personal data between organisations. This requires no agreement between organisations.


With SSO privacy regulations mean governments never release personal data. 

With Welcomer organisations enter into confidentiality agreements with individuals. These confidentiality agreements apply to both parties. This means either party can release data with agreement.  Doing this means compliance with existing privacy regulations.

Single Source of Truth

With SSO an organisation requires a single source of truth for any personal attribute. This implies the most efficient arrangement is for a person to have a single identity.

With Welcomer an individual has a separate ID for each application they use.  There is no one single source of truth or identity. Rather the individual links the same data elements across applications. This means an individual's Identity is the sum of their application connections.

Improving the user experience

With SSO external organisations are not interested in agreements on shared data.  The same is true of Welcomer.

But, all organisations want to reduce the effort of individuals to identify themselves. They want to reduce the effort of individuals to enter common data.  This happens with the Welcomer approach. It is more difficult with SSO.

Welcomer as a Keystone Technology

This post uses terminology from the article Strategy as Ecology by Iansiti and Levien.

Welcomer is a Keystone technology for platforms.  The platforms build on top of any set of datastores and the applications that created them. 

Keystones can increase ecosystem productivity by simplifying the complex task of connecting network participants to one another or by making the creation of new products by third parties more efficient. They can enhance ecosystem robustness by consistently incorporating technological innovations and by providing a reliable point of reference that helps participants respond to new and uncertain conditions. And they can encourage ecosystem niche creation by offering innovative technologies to a variety of third-party organizations. The keystone’s importance to ecosystem health is such that, in many cases, its removal will lead to the catastrophic collapse of the entire system. (from Strategy as Ecology)

Welcomer takes Open Source Software (including our own) and creates Permanent Internet Identities.  An Internet Identity is the history of all relevant transactions made about an entity.  The entity can be a person, an organisation, a house, a car, a hospital, a computer application, or an industry.  The purpose may be applying for a loan, buying a car, or getting a disease treated.  The transactions remain with the originating application, which need not change.  Welcomer adds a copy of the data elements that the owner of the data agrees to share and continues to own.  Welcomer connects the same data elements across the different databases. 

One important piece of Open Source Software in Welcomer is the one that creates the "permanent web".  Another is Gluu, which lets a person connect to databases.  Welcomer will share income with all the Open Source Software Components used to construct Welcomer. 

Welcomer and its other Open Source Components are a Keystone technology. Because the software is Open Source it is difficult for Welcomer to exhibit rent seeking behaviour.  Other organisations can immediately take over the role of provider of Permanent Internet Identities.  Other organisations will provide Permanent Internet Identities in competition with Welcomer. Welcomer will seek ways to cooperate and link with other Permanent Internet Identity systems.

There are many platforms that can use Permanent Internet Identities. Governments could create a Health Platform to connect the health data about a person. Others are a Superannuation Platform, a Taxation Platform, a Renewable Energy Platform, an Education Platform, a Smart Cities Platform, a Child Support Platform, an Insurance Platform, a Credit Platform, a MediSave Platform, a Water Rewards Platform, etc....  In fact any area where government is an operator, coordinator or regulator.

Government, private industry, not for profits or cooperatives can all operate platforms.  The data remains in the ownership of those who collect and make the data available.  There can be many platforms for the same function.  All platforms can link together via the Keystone Welcomer technology.

The approach fosters diversity and with it continuous evolutionary improvements in IT systems.  The Platforms do the same for industries the IT systems serve. 

The above gives the "big picture". The development path is through small steps using Minimum Viable Products.  Platform owners can supply development funds to independent application developers. They build applications for use by the platform owners.  The ownership and control of the applications remain with application developers. This means they have an incentive to develop and improve their applications.  

Funding Keystone Technologies

Keystone technologies are another way of implementing standards. Instead of standards there is an open source code base that is the standard. Cooperative ownership increases the chances of adoption of keystone standards.  This precludes the regular way of monetizing software through ownership. 

A way to monetize open source software is to sell deployments of the software.  Deployments come with support and reputation.  With Welcomer it is possible to prove that a deployment is genuine.  This protects reputation and value.  Payments can be licenses or, as in the case of Welcomer, through a share in application income.

Identic is a deployment of the Welcomer idea for personal identities.  Identic, along with the other open source software used in a deployment, shares in the income from applications.  Applications such as Verifier (proof of income) use Identic.  They pay the group-owned Identic to maintain and develop the cooperative standards.


Why Electronic Permanent Identities are Important

Each time we establish a relationship with an organisation with the use of a usercode/password we create a permanent electronic identity.  This works well when there are few relationships. It does not work well when we have many relationships and when we need to coordinate activities across relationships. In other words usercode/passwords are not scalable.

Standards such as OpenID give a common method of connecting.  OpenID remembers logons and allows them to be reused across relationships.  With Permanent Electronic Identities we extend the idea of OpenID to remember our interactions and to make this history available across all relationships.

Welcomer Permanent Electronic Identities achieve the objective of remembering all the interactions we have within each relationship and making the history available for future interactions.  This totality of the history of our interactions becomes our electronic identity.

We interact with other people, with organisations, and with things.  The totality of all our interactions can be included in our electronic identity.  On the other side of an interaction is another person, organisation or thing.  The history of our interactions is our electronic identity and the history of the interactions of organisation and things with other entities gives those entities an electronic identity.  Abstracting the concept of identity so that it encompasses all Things leads to operational efficiencies and to simpler interactions.

The history of our electronic actions is the raw material on which our permanent electronic identities are constructed.  These actions are represented as data held in databases.  Most of this data is held in indexed databases where the index represents the entity and the data associated with the index are the attributes that describe the action or transaction.

This approach to identity and to the linking of data is superior to other existing methods for the following reasons.

  • Distributed Ledgers are created automatically
  • Fast Identification via the devices we use is automatic (FIDO)
  • Privacy is built into the operation of the system
  • Existing and Future Transactional Systems (applications) are independent of Welcomer.

Distributed Ledgers

Definition of ledger: Collection of an entire group of similar accounts in double-entry bookkeeping.

Double Entry Book-keeping is a method of ensuring transactions can be proven to be true.  This is achieved by having two records of each transaction.  A distributed ledger is one where the ledger containing the transactions is held across many different databases.  The issue with distributed ledgers is being able to quickly and easily get the other double entry of the transaction held in a different database on a different machine with a different index.  Welcomer links both records of a transaction automatically so that the transaction double entry is automatic and easily confirmed.
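The claim above (that both records of a transaction can be linked and confirmed automatically, even when they sit in different databases under different indexes) can be sketched by matching records on their content rather than their index.  The silo names, indexes, and use of a SHA-256 content key are illustrative assumptions, not the actual Welcomer mechanism.

```python
import hashlib
import json

def content_key(record):
    """Derive a location-independent key from the transaction's content."""
    canonical = json.dumps(record, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

# The same transaction recorded in two silos under different indexes.
txn = {"date": "2016-03-01", "amount": "120.00", "parties": ["A", "B"]}
silo_a = {"cust-42": txn}
silo_b = {"acct-9913": dict(txn)}   # different index, same content

def confirm(record, other_silo):
    """The double entry is confirmed if the other silo holds the same content."""
    key = content_key(record)
    return any(content_key(r) == key for r in other_silo.values())

print(confirm(silo_a["cust-42"], silo_b))   # True
```

Because the key is derived from the content, neither silo needs to know the other's machine, database, or index to locate the matching half of the double entry.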

FIDO - Fast ID Online

Fast ID Online is where a person identifies themselves to a device and the device then becomes the person's proxy.  Devices are one of the factors that can be used to identify a person.  With Welcomer systems the use of the device is automatic.  This means that FIDO is the default factor used for identification because it is fast and non-intrusive and because people are able to control the use of devices.

Confidentiality by Design

Welcomer requires both parties to a transaction to have access to the transaction and both parties to give permission for the transaction to be shown to others.  These are the cornerstones of privacy and this permissioning is built into Welcomer.  (See Chain Link Confidentiality by Woodrow Hartzog)

Independence of Applications

Transactional systems built on top of Welcomer are independent of the databases and applications that create the data.  This means Welcomer can change without affecting the operation of the transactional systems on which Welcomer, and applications that use Welcomer, depend.  Transactional systems, whether or not they depend on Welcomer, are decoupled from Welcomer.

Combining Change of Contact Details with Verification of Identity

Welcomer is able to rationalise a person's contact details across an organisation and at the same time verify their identity.  When a person is asked to verify their identity, all the existing contact details from all the databases within the organisation are collated and compared.  If the evidence from within the organisation is sufficient for the organisation to have confidence that it knows the person, they are marked as having a verified identity.  If the person does not have enough verified contact details to establish their identity, they are asked to establish their bona fides with government sources by using the Federal Government Document Verification Service.  

Once a person has verified their identity they can now use their verified identity as a data source to verify their identity elsewhere.  They can also use it to keep their contact details up to date and to inform their original data sources of changes.
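The verification decision described above can be sketched as follows.  The thresholds, field names, and database names are hypothetical stand-ins for rules that the organisation's compliance staff would set.

```python
# Illustrative rules: a member is verified when enough independent internal
# sources agree on enough contact details; the thresholds are assumptions.
MIN_SOURCES = 2
MIN_MATCHING_ELEMENTS = 3

def verify_member(sources):
    """Collate contact details across internal databases and decide status."""
    collated = {}
    for db_name, details in sources.items():
        for field, value in details.items():
            collated.setdefault(field, set()).add(value)
    # A field "matches" when every database that holds it agrees on its value.
    matching = [f for f, vals in collated.items() if len(vals) == 1]
    if len(sources) >= MIN_SOURCES and len(matching) >= MIN_MATCHING_ELEMENTS:
        return "verified internally"
    # Otherwise fall back to the government Document Verification Service.
    return "refer to DVS"

member = {
    "crm":     {"name": "K. Cox", "email": "k@example.org", "phone": "0413961090"},
    "billing": {"name": "K. Cox", "email": "k@example.org", "phone": "0413961090"},
}
print(verify_member(member))   # verified internally
```

A member whose internal records are too sparse or inconsistent falls through to the DVS path, which mirrors the suggested approach below.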

Suggested Approach

  1. Connect all existing member contact details across all internal data sources using OpenID signon.
  2. Enable members to correct their contact details across the organisation.
  3. With compliance staff establish rules for verification of identity from internal data sources
  4. Establish whether further contact details are required from external government or other data sources and ask the member to obtain them through Welcomer using OpenID signon.
  5. Make the connected data available for viewing, change and reuse by the member.

What the member sees

  • The member has logged on to an organisational application or directly to the Contact Details and Verification using OpenID logon.
  • The member sees all the contact details in all the different databases accessed by an OpenID logon.
  • The member is told whether they are verified. 
  • If not they are given a set of contact details systems where they may already have verified contact details. Initially this will be the DVS but in the future it could be any Welcomer Enabled database in any organisation.
  • The member is shown the possible list of organisations where they can verify themselves and they are asked to connect to these until they have sufficient verification information as determined by the organisation's compliance group.
  • The member is then asked if they wish to update any of their existing contact details in any of the databases and if so they update their details.
  • The member can use the same system to update their contact details at any time in the future

Why is this approach to Verification of Identity better than using an Identity provider

  • It is expected that most members can be verified from their existing history of transactions with the organisation.
  • Those that need external verification will require fewer paid checks from the DVS.
  • The system helps establish internal consistency of contact details across the organisation.
  • The system provides an ongoing updating of contact details service
  • The system provides an ongoing confirmation of verification of identity and is the foundation for authentication of identity
  • The system provides a framework for a whole of member view and a method of getting the data for whole of member executive applications.

The steps to deploy the system

  1. Obtain the inventory of all databases with member contact details.
  2. Obtain a list of databases that currently have OpenID access.
  3. Establish the APIs between Welcomer and existing systems to move data and the rules around the movement.
  4. Establish with the organisation compliance staff what constitutes sufficient verification of identity
  5. Set up Welcomer as a Gateway provider to the DVS.
  6. Write the consolidation of existing contact details system and internal verification of identity.
  7. Write the extension to add in the DVS verification.
  8. Write the systems to change contact details.

A Single Member View

Member organisations face a particular customer service challenge. You require a unified view of a member’s data, but their details are spread across systems.  Without that information, you might be making decisions in the dark: not knowing the full story behind each member’s needs. 

Conventional approaches

Traditionally, achieving a unified view has been tackled by either:

  1. Merging the information into a single database, or

  2. Federating databases so they are all connected via a single signon.

Each solution has drawbacks.  Combining the applications into a single database creates security risks: a honeypot of rich member data.  Federated databases with single signon pose privacy risks: how do you control who sees each member’s information?

Both solutions are costly: they require large-scale changes to existing applications.

The Welcomer solution

Welcomer presents a more agile solution to achieving a unified member view.

You simply install a version of Welcomer on each database application. The software makes direct links between the same data items for the same member across each database. Each database owner has complete control over what they share.

Simple to implement

Welcomer integrates seamlessly into your existing databases.  The software is added to each data silo without changing existing applications, or requiring extra programming.  Once the data links are established, data is available to other applications through standard API calls. 

Because Welcomer has a smaller footprint than other applications, you can achieve a unified member view at much lower cost.

Better customer service

With more information at the ready, your organisation can do more to serve your members.  Eliminate having to ask members for information they’ve already provided. Make decisions with all the context. Ultimately, with Welcomer you can offer a better member experience.

Welcomer in brief

Welcomer’s key features are:

  • It requires no change to existing systems.
  • Substantially lower implementation costs than comparable systems
  • Open-source software is used, for business continuity
  • Enables easy extension to view members' data in other organisations, with their consent.

About the Welcomer team

We’ve developed innovative solutions for Electronic Identity since 2004.  While working at Edentiti the Team won the European Novay Digital Identity Prize for the best new Identity Innovation of 2011. The Team has decades of experience in the design, development and commercialisation of Identity and other User Centric systems. 



Welcomer CEO: Kevin Cox (PhD, MSc, BE)




Mobile: 0413961090

Phone: 0262410647



Welcomer - a primitive prefrontal cortex for electronic memories

The most typical psychological term for functions carried out by the prefrontal cortex area is executive function. Executive function relates to abilities to differentiate among conflicting thoughts, determine good and bad, better and best, same and different, future consequences of current activities, working toward a defined goal, prediction of outcomes, expectation based on actions, and social "control" (the ability to suppress urges that, if not suppressed, could lead to socially unacceptable outcomes). From Wikipedia

What is Welcomer?

Welcomer is Open Source Software installed with data silos.  The installation of the software links the Welcomer code and a copy of data from the silo into the Welcomer network. This permits permanent links to be established to any other Welcomer installation.  It links data elements for an entity in one data silo to the "same data element" for the same entity in other silos.  This means executive applications have access to data about the same entity from many different applications.  It can provide "big data" summaries from other similar entities to help executive applications. Executive applications perform similar functions to the prefrontal cortex but for any electronic entity.

Examples of personal executive applications acting like a prefrontal cortex

Executive applications can perform the same executive functions as a prefrontal cortex for the electronic memories of any object.  The memories are electronic data stored in data silos that result from existing functional applications, like paying a bill, searching using search terms, or recognising a face in a photo.  The functional applications provide the raw material for memories.  Welcomer links memories across applications and across organisations.  This allows Welcomer to differentiate among conflicting data across applications.  It enables applications to reflect on and simulate what will happen with other applications if certain actions are taken, and it enables entities to compare themselves to others using the same application.

A simple example: when you change your name as a result of a deed poll or marriage ceremony, an executive application can alert you that your name must be changed in other places and assist you to do it. 

An executive loan application can help you obtain a loan. You can hypothesise a loan amount and the application can check if the loan will cause conflicts with existing loan agreements. You can be given a strategy to achieve a loan. This might take various forms: finding another way to achieve the purpose of the loan, reducing expenses, increasing income, or restructuring existing agreements.

You can be alerted when someone tries to steal your identity. A transaction may be delayed or stopped or observed when someone, purporting to be you, uses a new device to try to conduct a transaction 1,000 miles from where you have just logged in to another application. 

Data about any entity can be collated.  We often think of applications as being about people but any entity can be reasoned about by considering data from multiple sources and applications. There are records about a house scattered over many data silos.  The house might be alerted to some maintenance that needs to be carried out.  The house can then alert the person responsible and let them know options.  The sales data about other houses is available to the house owner to help them make buy or sell decisions.

Organisations are entities and Welcomer assists organisations with their existing executive applications.  A common executive application is getting a "single view" of a customer or member.  Organisations have many diverse operational applications that contain personal data.  If Welcomer is used to give each individual a view of their own data, then an organisation also gets a single view of that customer's data across all its applications.  The advantage to an organisation is that, by complying with privacy regulations around personal data, it achieves a previously difficult corporate goal.

Many existing executive applications that organisations have can be made available to customers and members to enhance the user experience.  With payments, organisations give customers many ways to pay for goods and services.  The inverse is for customers and members to have many ways that they can pay. A Welcomer executive application can match the payment methods for the organisation with the methods available to the customer to the mutual benefit and convenience of both parties.

How it works

Data silos are regular databases created with regular applications.  The silos are discrete and contained.  The owner of a silo defines which elements are exposed by deciding what data items, representing what concept or object, can be linked to other data silos.  A silo might define a name as a string of characters that can be broken into parts.  It might say that if there is one part it is a given name, and if there are two parts it is a given name followed by a surname.  This information is exposed to other silos, and the other silos can decide to allow names in their own data silo to be linked to it.  When a name is actually linked by the person between two silos, this automatically links to all other linked silos for the person.
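The name rule just described can be sketched directly.  The function name is illustrative; it is one silo's exposed definition of "name", not part of Welcomer itself.

```python
def parse_name(name):
    """Apply the silo's exposed rule: one part is a given name,
    two parts are a given name followed by a surname."""
    parts = name.split()
    if len(parts) == 1:
        return {"given_name": parts[0]}
    if len(parts) == 2:
        return {"given_name": parts[0], "surname": parts[1]}
    # The rule in the text only covers one or two parts; as an assumption,
    # treat everything after the first part as the surname.
    return {"given_name": parts[0], "surname": " ".join(parts[1:])}

print(parse_name("Kim"))        # {'given_name': 'Kim'}
print(parse_name("Kim Cox"))    # {'given_name': 'Kim', 'surname': 'Cox'}
```

Another silo that sees this exposed definition can decide whether its own "name" items are compatible and so whether to allow linking.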

Linked data silos transmit data across the links using messages.  Message passing is essential because it means connections are made via data content, not via indexes or location.  The receiving data silo decides independently and secretly what it will do with the received data.  These rules are specified by the owner of the data silo.  The form of the rules is built into the Welcomer software, and the rules specify what happens when the silo receives a message relating to the data elements in the silo.  Requests for data, or to change data, can be ignored or acted upon independently by each data silo.  

A rule might be that a request to return data will only be allowed if the person has been authenticated with two or more factors. Another rule could be that the application requesting access is in on an approved list and is from an approved organisation.
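These two example rules can be sketched as follows.  The request fields and the approved list are hypothetical; a real silo owner would supply their own rules in whatever form the Welcomer software defines.

```python
# A minimal sketch of silo-owner rules, assuming a request carries the number
# of authentication factors used and the requesting application's identity.
APPROVED_APPS = {("verifier", "example-org")}

def allow_request(request):
    """Apply the two example rules: two-factor auth and an approved app list."""
    if request["auth_factors"] < 2:
        return False
    if (request["application"], request["organisation"]) not in APPROVED_APPS:
        return False
    return True

ok = {"auth_factors": 2, "application": "verifier", "organisation": "example-org"}
weak = {"auth_factors": 1, "application": "verifier", "organisation": "example-org"}
print(allow_request(ok), allow_request(weak))   # True False
```

Because each silo evaluates its own rules, two silos receiving the same request can independently reach different decisions, which is the point of the distributed control described above.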

Once a link is established it is remembered by its content.  That is, when the Welcomer software receives a message it checks to see if the content it receives can be decoded.  If it can, it decodes it.  If it can't, it passes the message on to another linked data silo.  The total number of linked silos for any data item is normally small.
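The decode-or-pass-on behaviour can be sketched with a pair of linked silos.  The class, the shared content keys, and the single forwarding link are illustrative stand-ins for the real mechanism.

```python
# A minimal sketch of "remembered by content": each silo can decode only the
# messages whose content key it holds; anything else is passed along the links.
class Silo:
    def __init__(self, name, known_keys):
        self.name = name
        self.known_keys = known_keys   # content keys this silo can decode
        self.next_silo = None          # next linked silo in the small chain

    def receive(self, key, payload):
        if key in self.known_keys:
            return f"{self.name} decoded: {payload}"
        if self.next_silo is not None:
            return self.next_silo.receive(key, payload)   # pass it on
        return "undeliverable"

a = Silo("A", {"name-link"})
b = Silo("B", {"dob-link"})
a.next_silo = b
print(a.receive("dob-link", "1980-01-01"))   # B decoded: 1980-01-01
```

The message reaches silo B without A ever knowing B's index or location; routing is by content alone, and because the set of linked silos per data item is small, the chain stays short.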

Control of access is held within the data silos and is distributed.  The sender of a message can also impose rules on access that can be enforced through the Welcomer software.  For example, a sender may require the receiver to have agreed to keep the data it receives to be used for defined purposes and to only transmit it to other parties via the Welcomer software.

In summary Welcomer is software that makes permanent connections across data silos and allows applications to access data securely and rapidly no matter where it is held. 

Relative advantage of Welcomer

Applications that perform executive functions abound.  An advantage of the Welcomer approach is that it decouples existing operational applications from executive applications and provides a standard approach to the collation of data. Welcomer is introduced into an organisation without the need to change existing applications and processes.  New applications can be introduced with the ability to get data from existing applications.  Operational applications can continue to be developed and deployed and to concentrate on the task required. That is, like the prefrontal cortex we separate executive functions from operational functions.

Other strategies to achieve the same goals of executive applications are:

  • centralise and use a common suite of applications from a single vendor or from vendors that use the same software platform.
  • use different vendors and to use standard interfaces between applications.
  • provide translation gateways between systems

Advantages of the Welcomer approach are that it leaves existing systems intact and is scalable, simpler, and easier to change.  Welcomer does not replace other approaches.  Rather it provides another dimension to enhance the capabilities of existing and future applications.

BlockChain is promoted as offering a solution to the connection problem across different databases by providing a distributed common index or distributed ledger. Welcomer's underlying structure of permanent links to "the same" data elements is a very flexible, simple distributed ledger.

Welcomer can be built on top of systems like the InterPlanetary File System (IPFS), as both access data by content.

Outcomes of Connecting Data by content

The underlying communication structure, along with the ability to specify enforceable rules as executive functions, makes existing applications more effective in the following ways.

  • data access permissions are enforceable
  • distributed ledger systems are easy to deploy and operate
  • data can be safely replicated and stored in multiple places to reduce transmission costs
  • backup of data can be achieved by storing data in multiple places
  • data does not get "lost" because a hyperlink is broken or no longer active
  • secure private communications channels are easily deployed
  • applications can provide highly secure authentication processes for transactions
  • individuals have access to personal data in data silos held by organisations

Distributed ledger systems have many applications.  Amongst them are:

  • controlling land title transfers
  • allowing secret anonymous voting
  • allowing collection of verified unidentified data
  • allowing individuals and organisations to issue their own credit


BlockChains and Welcomer

The article "BlockChain a new economic model" describes how a Distributed Consensus Ledger can create new economic opportunities and new economic models.  

BlockChains are a method of implementing Distributed Ledgers. Welcomer is another way.  A Distributed Ledger is a way of having the same identifier across different data silos for the same Identity.  BlockChains do this by starting with a token for an Identity in any data silo and distributing the same token to other silos in ways that cannot be changed.

Welcomer creates a Distributed Ledger by creating a network of interconnections where Identity emerges from the interconnections. With Welcomer, data silos all have different identifier tokens for an Identity, be it a person, company, house, program, or money token.  A unique Identity emerges from the Welcomer connections and the activity over the connections.

The Identity created with Welcomer connections is a richer, more complex structure than the BlockChain data structure.  Importantly, it closely models the real world of Identities and the relationships between Identities.

Welcomer fits seamlessly into the existing structure of online commerce and online information. BlockChains are a new technology and will require retooling of existing systems for their wide deployment.  Welcomer uses existing technologies and can be introduced incrementally into systems.

Distributed Consensus Ledgers have wide uses.  Welcomer creates a Distributed Ledger as a by-product of creating a distributed Identity from silos of information.  Welcomer does not need consensus and the complexities this entails.

Using a brain analogy, BlockChains are part of an electronic Reptilian reactive brain.  Welcomer adds a Mammalian Prefrontal Cortex to the Reptilian Brain so that Welcomer can examine itself and improve.

Welcomer enables all the economic possibilities of BlockChains plus a whole lot more.

Single Customer View through Direct Connections – the pathway to a more streamlined online experience

Kevin Cox and Lisa Schutz

The Problem – Billions In Misdirected Activity

The problem of not having a clear view of customers across organisations, or groups of organisations such as Government departments, costs billions a year. The costs occur in three ways: the billions in misdirected services that could be saved if the right people had the right information; the human lives and time wasted; and the organisational costs of cleaning up data and cleaning up messes after data gets mismatched. Single Customer View is an unfortunate phrase, because it misspecifies what is at stake here: nothing less than getting the job done right (whatever it is) first time.

Traditional Indexes Take Effort

Traditional approaches to achieving a single view of the customer relationship (or the citizen) work on very old-school models of database management. Think of a library with an old-fashioned card catalog. If you go to a journal and want to read one of the citations, you have to physically go back to the central catalog and, if you are lucky and it is well maintained, it will direct you to where the cited work is physically housed. The central catalogs are typically held at the library.  You can then go to the new location and retrieve the citation. If the location is a library that you cannot physically visit, you contact the library and arrange an inter-library loan. Ouch. Of course anyone under 35 will probably have no idea about card catalogs. Go look them up and then ponder why, in these advanced times, we basically use database approaches that are about as inefficient and clunky as that!

Create a different structure - Links to similar information

So what is the alternative? Think of how you might browse journals these days. You expect a citation to have a hyperlink embedded within it. It’s like being on the equivalent of a magic carpet, being transported from citation to citation. The hyperlinks contain the knowledge of where the journal referenced is stored. In other words, hyperlinks are links in context to other relevant material.

Abstracting from this very old-world analogy, what we are talking about is essentially the linking of relevant ideas. And if we set the hyperlinks up to link by content rather than location, then we can create a structure that makes hyperlinks far more efficient.

If you can retrieve relevant data efficiently, then of course you can assemble information to any purpose coherently. From single customer view to identification and so forth.

So technically, what does this look like?

With Welcomer, database records have additional hyperlinks that connect similar data items for the same person in two other databases.  The databases are created by different applications and organisations that are charged with only releasing information to authorised people.  Because the data is about the person, and it is linked to the person's same data elsewhere, the person is authorised to access it. The application the person is using to access the data has also been authorised by the organisation responsible for the data.

This gives rise to the data structure shown in the diagram.


This structure is different to an index structure.  It is an abstraction that creates a set of links that we can consider as a whole. Instead of thinking about each individual element across all the databases as a separate element, we can label the polygon ABCDE as the NAME of a person.  The value of the NAME will vary across the different databases, but the whole polygon is still considered a NAME.  In one database it might be Lisa, in another Lisa Schutz, in another it is misspelled as liza Schutz, in another it is misspelled as Lisa Schultz, and in another the NAME is simply Schutz.

These databases will have other information in them.  In A, B and C we might have ADDRESS.  In D and E we might have SALARY.  ADDRESS and SALARY will have their own polygons of connections to other databases.

If these data elements are connected for the same person, the result can be visualised as a set of polygons that together create a polyhedron.
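The polygon abstraction above can be sketched in code. The database labels and NAME values come from the example in the text; the `Polygon` class, its methods, and the "unknown" ADDRESS placeholders are illustrative assumptions, not the Welcomer implementation.

```python
# Sketch of a Welcomer "polygon": the same logical data element (NAME)
# held with different local values in several databases, linked as one
# structure. Database labels and values follow the example in the text.

class Polygon:
    def __init__(self, label):
        self.label = label            # e.g. "NAME"
        self.members = {}             # database label -> local value

    def link(self, database, local_value):
        self.members[database] = local_value

    def values(self):
        return dict(self.members)

name = Polygon("NAME")
name.link("A", "Lisa")
name.link("B", "Lisa Schutz")
name.link("C", "liza Schutz")         # misspelled locally, still linked
name.link("D", "Lisa Schultz")
name.link("E", "Schutz")

# Other elements form their own polygons; a person's polygons together
# make up their polyhedron.
address = Polygon("ADDRESS")
for db in ("A", "B", "C"):
    address.link(db, "unknown")       # placeholder values for the sketch

polyhedron = {p.label: p for p in (name, address)}
print(sorted(polyhedron["NAME"].values()))
```

Note that no database is forced to change its local value: the polygon only records that the five values are the same logical NAME.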


This data structure gives us an abstraction to reason with and implement as program code. It is a general approach to connecting distributed data and brings structure and modelling to hyperlinked systems.

It can work for any real world thing be it information about people, cars, houses. Whatever.

Making It Real – Keeping Addresses Up To Date

The unfortunate fact for IT teams is that people change address. As a result of this unfortunate fact, a high proportion of address data is today out of date and inconsistent because people tend to not notify every organisation or database they interact with of a change of address.

A "home address" can be a data element within a Welcomer connected database and all the "home addresses" can be linked by a Welcomer polygon.  Each of the "home addresses" resides in its own, respective database, which might include Government tax records, driver registration, insurance policies and doctors' records. The "home address" might be the place the person lives, the place to deliver parcels, or the place they lived when the transaction that created the "home address" was made.  At each database the rules for changing the "home address" are specified and defined as part of the local database.  When any "home address" in any of the databases changes, a message can be sent along the polygon path with the change, and each database makes its own decision, using predefined rules, on whether or not to update the "home address".

Importantly, the decision made on address is a characteristic of each database, and the decision for each database is made asynchronously and independently of any other database.  No other database knows whether any of the other databases changed the address.  All they know is that somewhere the address was changed.  In other words, we have a distributed database function where each distributed component acts independently. This requires no coordination across databases.  It means that when data is changed in one database, the same data held in other databases is changed, if required, but only at the time needed.
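The propagation just described can be sketched as a broadcast along the polygon where each database applies its own private rule. The database names and rule functions are illustrative assumptions; only the pattern (local rules, independent decisions) comes from the text.

```python
# Sketch of asynchronous address propagation around a polygon: a change
# message reaches every linked database, and each applies its own
# predefined rule independently. The rules and names are illustrative.

def always_update(old, new):
    return True

def never_update(old, new):
    return False            # e.g. a record that keeps the address
                            # at the time of the original transaction

class AddressRecord:
    def __init__(self, name, address, rule):
        self.name = name
        self.address = address
        self.rule = rule    # local, private decision rule

    def on_change(self, new_address):
        if self.rule(self.address, new_address):
            self.address = new_address

polygon = [
    AddressRecord("tax office", "1 Old St", always_update),
    AddressRecord("insurer", "1 Old St", always_update),
    AddressRecord("old policy", "1 Old St", never_update),  # historical
]

# A change in one database is broadcast along the polygon path; no
# database learns what the others decided.
for record in polygon:
    record.on_change("2 New Ave")

print([r.address for r in polygon])  # ['2 New Ave', '2 New Ave', '1 Old St']
```

The key design point is that the rule lives with the record, so no coordination or consensus step is needed across databases.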

Making it Real - Verification of Identity

In many societies, Electronic Proof of Identity is provided by a person proving they possess documents showing they are uniquely identified by organisations.  Examples are an ID card, a driver's license, a passport, or a relationship with a financial organisation. When a person wishes to uniquely identify themselves to another organisation, they use one or more of these documents to prove their uniqueness. The organisation then gives them another unique id so that it can retrieve the information it stores about the person when the person identifies themselves to the organisation.

With Welcomer Enabled databases a similar process is used except that the unique ids established with each connected organisation are used to establish new unique ids for other organisations. This contrasts with using the same physical ids for each new organisation.

A person goes to the new organisation (let’s call it bank D). They assert that they have a prior relationship with Organisations B, C and E. They give permission for D to use its already established organisational links with B, C and E to check whether they do have those relationships and, if so, D creates a unique ID that is only known to D. In turn, the person takes their unique ID for the organisation and adds it to their personal polyhedron. Now, if the person goes to another participating organisation, they can assert that they have unique IDs with B, C, E and D.

A Welcomer ID is an emergent property of the connected polygon of unique ids created for Welcomer enabled databases. A personal ID polyhedron if you will.

The advantage of this method is that, like the internet, as long as there are intermediate linkages no two organisations need to be directly connected. And the raw data typically used for verification of identity can be used as a secondary confirmation. The approach relies on the fact that all entities have allocated unique IDs to the other entities to which they are connected, and that the unique ids are connected through a unique ID polygon.


In the diagram the entities A, B, C, D and E are represented by bubbles. A is a person. B,C,D and E are organisations. The lines represent connections between entities.

To take you through the example above again, more technically: now that A is entering into a banking relationship with Bank D, the link AD is to be added. For AD to be created, the rule specified by D is that there be three confirmations that A is unique with three other trusted entities. The person A also sets the rule that it wants D to be uniquely identified by three other entities.  A sends a message around its polygon of unique ids and asks whether each entity has a relationship with D. D sends a message around its polygon of unique ids and finds the number of unique instances of A.

If both A and D establish from the messages sent back to them that the other has three instances, then the link AD is established, each putting its unique id for the other party in its own polygon of unique ids.
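The mutual three-confirmation rule can be sketched as a count over the existing link graph. The entity names follow the diagram in the text; representing links as a set of undirected pairs is an illustrative simplification (the real system exchanges private unique ids rather than sharing a global graph).

```python
# Sketch of the AD verification rule: A asks around its polygon whether
# each linked entity has a relationship with D, D counts unique
# instances of A, and the link is created only if both sides see
# three confirmations. The shared relationship set is a simplification.

relationships = {          # existing unique-ID links (undirected pairs)
    ("A", "B"), ("A", "C"), ("A", "E"),
    ("D", "B"), ("D", "C"), ("D", "E"),
}

def linked(x, y):
    return (x, y) in relationships or (y, x) in relationships

def confirmations(subject, other):
    """How many of `subject`'s linked entities also have a link with `other`."""
    neighbours = ({b for a, b in relationships if a == subject} |
                  {a for a, b in relationships if b == subject})
    return sum(1 for n in neighbours if linked(n, other))

REQUIRED = 3
if confirmations("A", "D") >= REQUIRED and confirmations("D", "A") >= REQUIRED:
    relationships.add(("A", "D"))   # each party stores its own private id

print(linked("A", "D"))  # True
```

In the real arrangement each side only learns the count of confirmations, not which entities supplied them, which is why no two organisations need a direct connection beforehand.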

It is important that the ID that A gives D is only known to A and the ID that D gives to A is only known to D.  Public key pairs for identity overcome the problem of passing unique IDs around the system.

The approach works for other entities such as companies, trusts, not-for-profits, etc., as A, B, C, D, or E can be any type of entity.  The system also works for other things such as a motor car, a house, a computer, or any object in the Internet of Things. The graph of interconnecting identities is not just for people. It means that any type of entity can act as an identity verifier for any other entity. For example, if a person owns a house and the ownership of the house is recorded in a Welcomer connected database, the house can be used to verify the identity of the house owner.

The number of unique IDs can vary with the confidence required.  A newborn baby may initially be linked to just one parent, then other people and other organisations can be linked.

The database of common identity information and links to other unique ID polygons for an individual will initially be stored in Verification of Identity application databases but there is no logical reason why every person could not move their database of connections and ID information and store it on their own phone where the phone becomes the electronic representation of the person.

A person can have many separate polygons of unique IDs.  It may be that a person wants to have different IDs for social connections, for family connections and for business connections. All these can be kept independent and separate from each other.

Verification of identity is a continuous process.  Each new verification strengthens the trust in the existing verifications.  It means that anomalies in behaviour can be detected and confirmation of identity can be required if fraud is suspected.

This form of identity verification can be deployed rapidly in any society by making existing large databases of personal data Welcomer Enabled.  If births, deaths and marriages, passports, social security IDs, tax records, drivers licenses, visas, bank accounts were Welcomer Enabled then this would be sufficient to cover most people.  People could create their polygons of unique ids the first time they accessed any services by any of these organisations. They could later use their official identities to create other trusted identities for other purposes.

Making it Real with Biometric Authentication

Biometrics are a convenient, simple way for a physical person to be authenticated (linked to a computer record).  Assume Welcomer is used to verify the identity of a person, and voice authentication is wanted to confirm it is the same person using the verified identity.  To do this, a voice print can be part of each unique id record.  When a new connection is made, a voice print for the person can be combined with all the other existing voice prints across all the unique ids. The new connection is not made unless the voice print, or an alternative authentication, is established.  Each time the voice is used in any application it strengthens the combined voice print across all applications.
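The idea of a combined print that strengthens with use can be sketched as a running mean over feature vectors. Real voice biometrics are far more sophisticated; the fixed-length vectors, the running-mean template, and the distance threshold here are all illustrative assumptions.

```python
# Sketch of strengthening a combined voice print: each authentication
# contributes a new sample, and the stored template is the running mean
# of all samples seen across applications. The feature vectors and the
# distance-based match test are illustrative assumptions.

class VoicePrint:
    def __init__(self, dims):
        self.template = [0.0] * dims
        self.samples = 0

    def add_sample(self, features):
        # incremental running mean over all samples seen so far
        self.samples += 1
        self.template = [
            t + (f - t) / self.samples
            for t, f in zip(self.template, features)
        ]

    def matches(self, features, tolerance=0.5):
        distance = sum((t - f) ** 2
                       for t, f in zip(self.template, features)) ** 0.5
        return self.samples > 0 and distance <= tolerance

vp = VoicePrint(dims=3)
vp.add_sample([1.0, 2.0, 3.0])      # enrolment at the first connection
vp.add_sample([1.2, 2.0, 2.8])      # later use in another application
print(vp.matches([1.1, 2.0, 2.9]))  # True
```

Each use in any application adds a sample, so the combined template drifts towards the person's typical voice rather than any single recording.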

Making it Real with Programs

The Welcomer links are not synchronous in the sense of database records, where all the data elements in a record are retrieved when a record is read.  Welcomer communication is achieved by passing messages along links, and communication is asynchronous.  The system addresses records by content rather than by location.  With an index or ID, an application asks for a particular data record to be retrieved.  With Welcomer messages, the value of a data element is retrieved by broadcasting a message along connected links and asking the record with the given content to respond. This approach is implemented with reactive programming technology, where the Open Source Welcomer software provides APIs for applications to add a new data link and to retrieve data element values via these data links.
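The broadcast-and-respond retrieval just described can be sketched with Python's asyncio standing in for a reactive framework (the text says Welcomer itself uses Akka). The record class, element names, and values are illustrative assumptions.

```python
# Sketch of content-addressed retrieval: rather than looking a record up
# by location or index, a query is broadcast along the links and every
# record whose content matches responds. asyncio is a stand-in for a
# reactive framework such as Akka; the records and names are illustrative.

import asyncio

class LinkedRecord:
    def __init__(self, element, value):
        self.element = element      # e.g. "home address"
        self.value = value

    async def respond(self, element):
        await asyncio.sleep(0)      # each silo answers asynchronously
        return self.value if element == self.element else None

async def retrieve(links, element):
    # broadcast the query along all links and collect the non-empty answers
    answers = await asyncio.gather(*(r.respond(element) for r in links))
    return [a for a in answers if a is not None]

links = [
    LinkedRecord("home address", "1 Old St"),
    LinkedRecord("salary", "50,000"),
    LinkedRecord("home address", "2 New Ave"),
]
print(asyncio.run(retrieve(links, "home address")))  # ['1 Old St', '2 New Ave']
```

Because the caller never names a location, records can move or be replicated without breaking retrieval, which is the property the following paragraph relies on.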

Linking database records that share the same data elements reduces the potential hyperlink address space, makes the approach scalable and relatively easy to program, and makes it possible to optimise performance.

Welcomer is implemented with the Akka programming framework from Typesafe.

Making it Real by Connecting Applications

Organisations control the reuse of data by defining the data elements linked across applications.  Applications call Welcomer APIs to access the data, and the data linkages are defined when the applications are deployed.  Welcomer publishes a list of APIs and a list of data element definitions in Welcomer Enabled applications.  The organisation deploying an application specifies the data elements it thinks are the same in other applications.

Rules on access to data are defined in Welcomer and reside in each Data Silo.  These rules are private to the Data Silo, which may or may not release them.

For example, a person may not be able to access data unless the person has had their identity verified in three other places, and they have authenticated themselves with two factors such as Voice Print and email address.

An organisation may not wish specified other organisations to view data, and/or may want to allow only specific organisations to view data.

Another rule may allow a party to know the data exists but not to access it.
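The silo-local rules described above can be sketched as a list of predicates that a silo evaluates privately against each request. The rule names, request fields, and threshold values are illustrative assumptions drawn from the examples in the text.

```python
# Sketch of Data Silo access rules: each silo keeps its own private list
# of rules and grants access only if every rule passes. The rule names,
# request fields, and thresholds are illustrative assumptions.

def verified_in_three_places(request):
    # e.g. identity verified in at least three other places
    return request.get("verifications", 0) >= 3

def two_factor(request):
    # e.g. authenticated with two factors such as voice print and email
    return len(request.get("factors", [])) >= 2

def not_blocked(request):
    blocked = {"org-x"}             # organisations this silo denies
    return request.get("organisation") not in blocked

RULES = [verified_in_three_places, two_factor, not_blocked]

def allow_access(request):
    return all(rule(request) for rule in RULES)

request = {
    "organisation": "bank-d",
    "verifications": 3,
    "factors": ["voice print", "email address"],
}
print(allow_access(request))  # True
```

Because the rule list is private to the silo, two silos holding linked copies of the same data element can legitimately answer the same request differently.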

To Know More

Welcomer is committed to deploying the connections with open source software and invites anyone to contribute to and use the code. Using it in applications will make the data stored and accessed by the application available to any other application that uses the code.  Using the approach within an organisation will save much money and remove the need for other approaches to providing a single customer view, as an alternative or in parallel to Single Sign-on.  Used widely, the software will reduce the cost of building and operating computer systems by billions of dollars each year. To know more about the approach and see examples of Welcomer at work visit