Competitively priced BIM Modelling / Guaranteed quality / Fast turnaround

Ethical AI in BIM: Transparency, Data Security, and Governance

Written by BIM Outsourcing
January 10, 2025

Integration of BIM workflows with technologies like AI, IoT, digital twins and cloud computing promises an exciting future for the construction industry. But like any new technology or process, AI implementation introduces new risks, potential misunderstandings, differing expectations and, ultimately, disputes. We therefore need to balance these innovative technologies against the risks and unintended liability they create.

Ethical dilemmas of this game-changing duo of AI and BIM fall into the following categories:

  1. Copyright
  2. Confidentiality
  3. Reliance, Reliability and Responsibility
  4. Transparency
  5. Ethics and Bias

Copyright

Copyright law protects only human-authored works, including those made with AI assistance; it does not extend to works generated entirely by AI. Privacy and data security must also be ensured when AI is used in a BIM workflow.

There are two types of AI models: private (or closed) AI models and public AI models. Private AI models are accessible only within an organization and are trained on data available in that organization's own records. ChatGPT is an example of a public AI model; these models are trained on data collected from a wide range of external sources. With public AI models, it is difficult to know whether the training data carried the correct copyright permissions, which raises concerns about potential copyright issues.

In private systems, there may be an assumption that copyright issues do not arise because the AI is trained on the organization's own data, but some of that data may still be restricted from reuse or copying. Accidental copyright breaches must therefore be avoided.

For public AI models, there are several ongoing cases in which writers, artists and other content creators have filed copyright claims, asserting that their works were used to train AI models without their consent. It also remains unsettled whether content produced with generative AI can itself be copyrighted, even when that content is unique in its characteristics.

Some companies, such as Adobe and Microsoft, offer indemnities to reassure customers against potential copyright lawsuits arising from the use of their AI models. Such indemnities are rare, however, particularly among software companies, whose terms of use typically contain extensive exclusions of liability. Their application has yet to be tested, but they at least show that AI companies are aware of the legal risks and are working to mitigate them. The same applies to BIM models developed with the aid of AI: not all of them are automatically copyrighted, and their elements may be eligible for copyright protection depending on their creativity and originality.

Confidentiality

Almost all construction contracts contain terms and conditions dealing with confidentiality of information. Some contracts restrict sharing any project data with any entity outside the project team, while others are more flexible and only restrict sharing outside the contracting parties. Typically, these clauses allow disclosure in specific situations, such as when the data is already public or disclosure is required by a court order.

Once data is entered into a public AI model, it is effectively disclosed to the public without restriction: data used to train the model cannot realistically be removed or deleted, and AI results generated for other parties could be based on it. That would violate confidentiality agreements and expose sensitive business information to external parties. There have been a number of headlines in which employees unknowingly shared an organization's confidential data, usually because they were unaware of the potential consequences. The solution is to guide users on best practices and the basic dos and don'ts.
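A minimal way to act on those dos and don'ts is to scrub obviously sensitive tokens before a prompt ever leaves the organization. The sketch below is illustrative only: the `PRJ-1234`-style project-code format and the pattern list are assumptions, and a real deployment would need an organization-specific inventory of what counts as confidential.

```python
import re

# Hypothetical patterns for sensitive data; a real organization would
# maintain its own list (client names, site addresses, cost figures, ...).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.\w+"),
    "PROJECT_CODE": re.compile(r"\bPRJ-\d{4}\b"),  # assumed code format
}

def redact(text: str) -> str:
    """Replace sensitive tokens with placeholders before the text
    leaves the organization (e.g. is sent to a public AI model)."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarise progress on PRJ-1234; contact jane.doe@example.com."
print(redact(prompt))
# -> Summarise progress on [PROJECT_CODE]; contact [EMAIL].
```

Simple pattern-based redaction will not catch every leak, but it makes the "don't paste confidential data into public tools" rule enforceable rather than purely advisory.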

Reliance, Reliability and Responsibility

The reliability of AI-generated results in BIM, and how far they can be relied upon, depends on several factors: the quality of the AI model, the data used for training and the specific application. Generative AI is like a calculator, a tool to facilitate and speed up work, not a complete solution for automating the BIM workflow. Generative AI can also simply make things up. This was illustrated by a lawyer who submitted six cases provided by ChatGPT in support of his argument, only to be told that they were entirely fictional; he was penalized for it. We cannot rely solely on AI-generated results. The better the data used to train the model, the better the output will be.

To cut a long story short, we cannot blindly rely on AI-generated results, whether in a BIM workflow or not. Nor can we use AI as an excuse for errors or mistakes.

Transparency

AI models, particularly machine learning algorithms, can sometimes act as black boxes, making it difficult for users to understand the process and logic behind the solution. For ethical AI, it is critical for the system to be transparent, and stakeholders should be able to understand the rationale of design, construction and management processes. 
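One practical way to avoid the black-box problem is to have the system record its reasoning alongside each decision. The sketch below is a hypothetical rule-based clash classifier; the 25 mm MEP clearance threshold is an assumed value for illustration, not a real standard. The point is the shape of the interface: every verdict comes with an auditable trail of reasons.

```python
def clash_severity(gap_mm: float, discipline: str) -> tuple[str, list[str]]:
    """Classify a model clash and record the reasoning trail, so
    stakeholders can audit why the system reached its decision."""
    reasons: list[str] = []
    if gap_mm < 0:
        severity = "hard clash"
        reasons.append(f"elements overlap by {-gap_mm} mm")
    elif gap_mm < 25 and discipline == "MEP":
        severity = "clearance issue"
        reasons.append(f"gap of {gap_mm} mm is below the assumed 25 mm MEP clearance rule")
    else:
        severity = "no action"
        reasons.append(f"gap of {gap_mm} mm meets clearance requirements")
    return severity, reasons

verdict, trail = clash_severity(15, "MEP")
print(verdict)  # -> clearance issue
```

A machine-learning model cannot always expose its internals this cleanly, but it can still be wrapped so that inputs, outputs and the governing rules or thresholds are logged for stakeholders to review.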

Ethics and Bias

Lack of transparency can lead to lack of trust in AI systems, and decisions made using AI could be questioned by stakeholders, especially if they lead to negative outcomes. Bias in AI systems occurs when the models, or the data they are trained on, reflect existing prejudices, leading to unfair or skewed outcomes. For instance, if an AI system is trained on past projects that inappropriately used certain materials or design strategies, it may continue suggesting the same choices that were favored previously, even though they are not suitable in the current scenario.
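A first, crude defence against this kind of inherited bias is simply to measure how skewed the training data is before using it. The sketch below flags any label that dominates a set of past choices; the 60% threshold is an assumption chosen for illustration.

```python
from collections import Counter

def flag_skewed_labels(labels: list[str], threshold: float = 0.6) -> dict[str, float]:
    """Flag any label whose share of the training set exceeds the
    threshold -- a crude check that past preferences are not simply
    baked into the model."""
    counts = Counter(labels)
    total = len(labels)
    return {label: n / total for label, n in counts.items() if n / total > threshold}

# Hypothetical history of past material choices
past_choices = ["steel"] * 8 + ["timber", "concrete"]
print(flag_skewed_labels(past_choices))
# -> {'steel': 0.8}
```

A flagged label does not prove bias, but it tells reviewers where to look before the model starts echoing past habits as recommendations.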

Sometimes, even when the data used to train AI algorithms is diverse, the algorithms can introduce bias themselves. For instance, an AI system might prioritize efficiency and speed when scheduling without taking the social implications for worker welfare into account, suggesting schedules that overwork laborers and cause them stress. The output of every AI algorithm or application should be fair, unbiased and inclusive of all available data. Decision-making frameworks should be multi-dimensional and balance competing objectives, so that no set of interests is favored over others.

Our office in the UK

Vinters Business Park, New Cut Road, Maidstone Kent, ME14 5NZ
