AI/machine learning

Artificial Intelligence Converts Drawings to Data

Artificial intelligence systems can be trained to recognize visual content in drawings and provide a simplified context. The complete paper highlights the use of AI to process a scanned drawing and redraw it on a digital platform.


Much data in the engineering world exists in the form of paper documents and drawings. Technically, these are considered unstructured data because extracting content from such drawings using traditional programs is highly resource intensive. However, artificial intelligence (AI) systems can be trained to recognize visual content in drawings and provide a simplified context. The complete paper emphasizes the use of AI in processing a scanned drawing and redrawing it on a digital platform. This approach can bring considerable advantages in achieving the goal of digital transformation.

Digitizing Paper Drawings

Traditionally, drawings are drafted by a process drafter who understands the engineering domain and its standard symbology. For AI to read a drawing, the program must develop a similar understanding of that symbology. AI can apply pattern recognition, text recognition, and line-segment recognition to build a model that learns to recognize the components of an engineering drawing.

Pattern Recognition. This term refers to the automated recognition of patterns and regularities in data. Pattern detection when applied to images deals with identifying occurrences of similar visual data of a certain class (such as humans, buildings, or cars) in digital images and videos. In the case of a drawing, the pattern could be a symbol, text, or line, with the data comprising all the pixels of a drawing. A well-trained algorithm could be used to perform visual recognition of engineering drawings. 

Symbols widely used in engineering drawings serve as inputs to the algorithm. The AI analyzes many examples of symbol patterns and, after several training iterations, learns to correlate a graphical pattern on the drawing with the corresponding symbol type. By analyzing the symbol-forming pixels and their locations, the AI can locate a symbol within the drawing.
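The paper does not describe its model, but the idea of matching symbol-forming pixels against a known pattern can be sketched with a toy template matcher. Everything here is illustrative: the drawing is a binary pixel grid, the "valve" glyph is invented, and a real system would use a trained recognizer rather than exact matching.

```python
def find_symbol(drawing, template, threshold=1.0):
    """Slide a binary symbol template over a binary drawing and return
    the (row, col) positions where the fraction of matching pixels
    meets the threshold. A toy stand-in for a trained symbol recognizer."""
    th, tw = len(template), len(template[0])
    h, w = len(drawing), len(drawing[0])
    hits = []
    for r in range(h - th + 1):
        for c in range(w - tw + 1):
            matches = sum(
                drawing[r + i][c + j] == template[i][j]
                for i in range(th) for j in range(tw)
            )
            if matches / (th * tw) >= threshold:
                hits.append((r, c))
    return hits

# Invented 3x3 "valve" glyph embedded in a blank 8x8 sheet at row 2, col 4
template = [[1, 0, 1], [0, 1, 0], [1, 0, 1]]
sheet = [[0] * 8 for _ in range(8)]
for i in range(3):
    for j in range(3):
        sheet[2 + i][4 + j] = template[i][j]

print(find_symbol(sheet, template))  # [(2, 4)]
```

Lowering `threshold` below 1.0 tolerates noisy pixels, at the cost of possible false positives; a learned model handles that trade-off far more robustly.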

Line Recognition. Lines in a drawing define the flow of the piping and instrumentation diagram (PID). A line, unlike a symbol, does not have a defined shape, so finding its edges requires a different approach. Many examples of marked lines must be provided so that the AI develops an understanding of what constitutes a line. With this training, the AI can recognize lines and their endpoints on the drawing. The line-coordinate information can later be used to recreate the lines on a digital platform, and other details, such as the length of a line or the components on it, can also be obtained.
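As a simplified illustration of extracting line coordinates from a raster, the sketch below scans each row of a binary image for runs of "ink" pixels and reports segments above a minimum length. This run-length approach (horizontal lines only, no learning) is an assumption for illustration, not the paper's method.

```python
def horizontal_segments(image, min_len=3):
    """Scan each row of a binary raster for runs of 1-pixels and return
    (row, start_col, end_col) for each run at least min_len pixels long.
    A simplified stand-in for the learned line-recognition step."""
    segments = []
    for r, row in enumerate(image):
        c = 0
        while c < len(row):
            if row[c] == 1:
                start = c
                while c < len(row) and row[c] == 1:
                    c += 1
                if c - start >= min_len:
                    segments.append((r, start, c - 1))
            else:
                c += 1
    return segments

# One 4-pixel process line in row 1; the lone pixel in row 2 is ignored
image = [
    [0, 0, 0, 0, 0, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 0, 1, 0, 0, 0],
]
print(horizontal_segments(image))  # [(1, 1, 4)]
```

The returned coordinates are exactly the kind of metadata that can drive recreation of the line on a digital platform, and segment length falls out of `end_col - start_col` directly.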

Text Recognition. Reading the text content in a drawing is equally important. The text on a drawing, such as tag numbers, notes, and holds, provides context to a drawing. Image recognition holds no value if a tag number cannot be correlated with the corresponding symbol. Optical character recognition (OCR) is the mechanical or electronic conversion of images of typed, handwritten, or printed text into machine-encoded text, whether from a scanned document or a photo of a document. The location of the text and its content are the two components of text extraction. The precise location of the text on the scanned image can be correlated with the image recognition to add metadata to the drawing. The digitized files can then be electronically edited, searched, and stored more efficiently. Several machine-learning (ML)-based OCR methods could be used to extract text from a scanned drawing.

Using natural language processing (NLP), the text can be filtered to obtain components that adhere to a regular expression. A regular expression is a specialized notation for describing patterns that are to be matched. NLP helps to filter the text based on the pattern.
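Filtering OCR output with a regular expression can be shown concretely. The tag pattern below (one to four capital letters, a dash, and three to five digits, loosely modeled on instrument tags such as "PT-101") and the sample text are both assumptions for illustration; a real project would use the tag-numbering convention of its own drawing standard.

```python
import re

# Hypothetical instrument-tag pattern: 1-4 capital letters, dash, 3-5 digits
TAG_RE = re.compile(r"\b[A-Z]{1,4}-\d{3,5}\b")

# Invented sample of raw OCR text from a drawing
ocr_text = "NOTE 3: VERIFY PT-101 AND FIC-2203 PRIOR TO HYDROTEST. HOLD 2."

tags = TAG_RE.findall(ocr_text)
print(tags)  # ['PT-101', 'FIC-2203']
```

Notes and hold numbers fall through the filter because they lack the letter-dash-digit shape, leaving only the tag numbers that need to be correlated with recognized symbols.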

Drawing Recreation. Manual efforts to recreate a drawing could be eliminated to a significant extent using AI. Figs. 1 and 2 show the process of recreating a drawing manually and automatically, respectively. Using AI, the drawing is preprocessed to remove noise and unwanted data from the image. This drawing is now fed into the pattern-recognition algorithm. In this step, the algorithm recognizes the symbols and generates an output with the respective metadata. Likewise, the drawing is fed into the line-recognition and text-recognition algorithms to identify the position of the lines and texts in the drawing.

Fig. 1—The manual drawing-recognition process.
Fig. 2—AI-based drawing-recognition process.

AI collects metadata after processing the scanned drawing. These data of symbol types and their locations can then be used by the designer to place the symbols in the digital version of the drawing programmatically. Similarly, lines and text could be placed in their respective locations. Because all the symbols, lines, and the text are placed using batch operations, the effort and the time to recreate a drawing decrease significantly.
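One way to picture the batch placement step is to render the recognition metadata into a vector format in a single pass. The sketch below emits minimal SVG; the record layouts (`symbols` as name/x/y, `lines` as endpoint pairs, `texts` as string/x/y) are assumptions, and a production system would target a CAD format and a symbol library instead of generic circles.

```python
def to_svg(symbols, lines, texts, width=400, height=300):
    """Recreate a drawing from recognition metadata in one batch pass.
    symbols: (name, x, y); lines: (x1, y1, x2, y2); texts: (string, x, y).
    Emits minimal SVG as a stand-in for a real CAD target format."""
    parts = [f'<svg xmlns="http://www.w3.org/2000/svg" '
             f'width="{width}" height="{height}">']
    for x1, y1, x2, y2 in lines:
        parts.append(f'<line x1="{x1}" y1="{y1}" x2="{x2}" y2="{y2}" '
                     f'stroke="black"/>')
    for name, x, y in symbols:
        # Placeholder glyph; real code would look the name up in a symbol library
        parts.append(f'<circle cx="{x}" cy="{y}" r="8" fill="none" '
                     f'stroke="black"><title>{name}</title></circle>')
    for s, x, y in texts:
        parts.append(f'<text x="{x}" y="{y}" font-size="10">{s}</text>')
    parts.append('</svg>')
    return "\n".join(parts)

# Invented metadata: one line feeding one tagged instrument
svg = to_svg(symbols=[("PT-101", 50, 60)],
             lines=[(10, 60, 42, 60)],
             texts=[("PT-101", 40, 80)])
print(svg)
```

Because every element is placed programmatically from the metadata, adding a thousand symbols costs the same loop, which is where the batch-operation time savings come from.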


Manual redrafting is time consuming and expensive. For example, 2 to 3 days can be spent recreating a drawing on a digital platform. With AI, the paper drawing can be recreated in a few minutes, saving at least 50% of the manual effort.

AI also has the capability to read handwritten comments and sketch markups, allowing the designer to automate such designated changes. For example, in a case in which one must delete all instruments that are circled in green, AI can recognize those instruments and help automate the deletion.

It takes an average of 25 hours to convert a single drawing. Even if the digitizing effort were reduced by only 50%, at a man-hour rate of $25, the savings would be roughly $300 per drawing. For a project with 3,000 drawings, savings could reach $900,000, and they scale proportionally with project size.
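The arithmetic behind those figures is worth making explicit; the article's per-drawing number is rounded down from $312.50 to $300, which is how 3,000 drawings yields the quoted $900,000.

```python
hours_per_drawing = 25   # average manual conversion time, from the article
rate_usd = 25            # man-hour rate, from the article
reduction = 0.5          # fraction of effort saved by AI
drawings = 3000          # example project size

saved_hours = hours_per_drawing * reduction        # 12.5 hours per drawing
saved_per_drawing = saved_hours * rate_usd         # $312.50 (article rounds to ~$300)
project_savings = saved_per_drawing * drawings
print(saved_per_drawing, project_savings)  # 312.5 937500.0
```

Using the rounded $300 per drawing reproduces the article's $900,000; either way, savings grow linearly in the drawing count.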

Furthermore, it is also widespread practice to initiate front-end engineering and design with document-driven, instead of database-driven, software. The output from the document-driven software is moved to a database during the detail-design phase. AI-based drawing conversions can aid here as well.


Currently, image-recognition techniques might work well for PIDs and process-flow diagrams, but complex drawings, such as those seen in structural, electrical, and architectural applications, have overlapping graphics that complicate the isolation and extraction of information. Recognition of curved lines is a challenge as well. However, with advances in deep learning, it should become possible to recognize overlapping graphics and to classify every pixel in a drawing as belonging to a component.

The quality of the paper scans is also critical to the output. If scans have a low pixel density per inch, text and lines may be difficult to identify.

This article, written by JPT Technology Editor Chris Carpenter, contains highlights of paper OTC 29358, “Engineering Data Management Using Artificial Intelligence,” by Kalicharan Mahasivabhattu, Deepti Bandi, Shubham Kumar Singh, and Pankaj Kumar, WorleyParsons, prepared for the 2019 Offshore Technology Conference, Houston, 6–9 May. The paper has not been peer reviewed.