As 3D printers have become cheaper and more widely accessible, a growing community of beginning creators is making their own things. To do this, many of these amateur crafters access free, open-source repositories of user-generated 3D models, which they download and fabricate on their 3D printers.
But adding custom design elements to these models represents a significant challenge for many makers, because it requires the use of complex and expensive computer-aided design (CAD) software, and is particularly difficult if the original representation of the model is not available online. Additionally, even if a user is able to add custom elements to an object, ensuring that these customizations do not detract from the object's functionality requires a level of domain expertise that many beginning creators lack.
To help creators overcome these challenges, MIT researchers developed a generative AI-based tool that allows the user to add custom design elements to 3D models without compromising the functionality of the crafted objects. A designer could use this tool, called Style2Fab, to customize 3D models of objects using only natural language prompts to describe the desired design. The user could then manufacture the objects with a 3D printer.
“For someone with less experience, the main problem they face is: now that they have downloaded a model, as soon as they want to make any changes to it, they are lost and don’t know what to do. Style2Fab would make it very easy to stylize and print a 3D model, but also to experiment and learn while doing it,” explains Faraz Faruqi, a computer science graduate student and lead author of a paper introducing Style2Fab.
Style2Fab is driven by deep learning algorithms that automatically divide the model into aesthetic and functional segments, streamlining the design process.
In addition to empowering beginning designers and making 3D printing more accessible, Style2Fab could also be used in the emerging field of medical manufacturing. Research has shown that considering the aesthetic and functional features of an assistive device increases the likelihood that a patient will use it, but clinicians and patients may not have the expertise to customize 3D printable models.
With Style2Fab, a user can customize the appearance of a thumb splint so that it blends in with their clothing without altering the functionality of the medical device, for example. Providing a user-friendly tool for the growing field of DIY assistive technologies was a major motivation for this work, Faruqi adds.
He wrote the paper with his advisor, co-senior author Stefanie Mueller, an associate professor in MIT’s departments of electrical engineering and computer science and mechanical engineering, and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL) who leads the HCI Engineering Group; co-senior author Megan Hofmann, assistant professor in the Khoury College of Computer Sciences at Northeastern University; as well as other members and former members of the group. The research will be presented at the ACM Symposium on User Interface Software and Technology.
Focus on functionality
Online repositories, such as Thingiverse, allow individuals to upload user-created, open-source digital design files of objects that others can download and fabricate with a 3D printer.
Faruqi and his collaborators began this project by studying the objects available in these huge repositories to better understand the functionalities that exist within different 3D models. This would give them a better idea of how to use AI to segment models into functional and aesthetic components, he says.
“We quickly realized that the usefulness of a 3D model depends a lot on the context, like a vase that could sit flat on a table or hang from the ceiling on a string. So it can’t just be an AI that decides which part of the object is functional. We need a human in the loop,” he says.
Based on this assessment, they defined two functionalities: external functionality, which involves parts of the model that interact with the outside world, and internal functionality, which involves parts of the model that must fit together after manufacturing.
A styling tool should preserve the geometry of external and internal functional segments while allowing customization of non-functional aesthetic segments.
But to do this, Style2Fab must determine which parts of a 3D model are functional. Using machine learning, the system analyzes the model’s topology to track the frequency of changes in geometry, such as curves or angles where two planes meet. Based on this, it divides the model into a number of segments.
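The article does not publish Style2Fab's segmentation algorithm, but the idea of splitting a surface wherever its geometry changes sharply can be illustrated with a minimal sketch. The snippet below is a simplified, hypothetical 1D analogue: each "face" is represented only by its unit normal, and a new segment starts whenever adjacent normals bend past a threshold. The function name, threshold, and toy mesh are all assumptions for illustration, not the paper's method.

```python
import math

def angle_deg(n1, n2):
    # Angle between two unit normals, in degrees.
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(n1, n2))))
    return math.degrees(math.acos(dot))

def segment_faces(normals, threshold_deg=30.0):
    # Start a new segment wherever adjacent faces bend sharply
    # (a stand-in for tracking frequent geometry changes).
    segments, current = [], [0]
    for i in range(1, len(normals)):
        if angle_deg(normals[i - 1], normals[i]) > threshold_deg:
            segments.append(current)
            current = []
        current.append(i)
    segments.append(current)
    return segments

# Toy "mesh": a flat run of faces, then a 90-degree bend into another flat run.
flat1 = [(0.0, 0.0, 1.0)] * 3   # upward-facing faces
flat2 = [(1.0, 0.0, 0.0)] * 3   # sideways-facing faces
print(segment_faces(flat1 + flat2))  # → [[0, 1, 2], [3, 4, 5]]
```

A real implementation would work on mesh face-adjacency graphs rather than a linear sequence, but the grouping logic is the same in spirit.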
Then, Style2Fab compares these segments to a dataset the researchers created containing 294 3D object models, with each model’s segments annotated with functional or aesthetic labels. If a segment closely matches one of those labeled parts, it is marked functional.
“But classifying segments based on geometry alone is a very difficult problem, because of the huge variation among shared models. So these segments form an initial set of recommendations presented to the user, who can very easily change the classification of any segment to aesthetic or functional,” he explains.
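The match-then-override workflow described above can be sketched as a nearest-neighbor lookup against an annotated dataset, followed by a user correction. Everything here is a toy assumption: the two-number feature vectors, the exemplar dataset, and the segment names are invented for illustration and bear no relation to the researchers' actual 294-model dataset or features.

```python
def nearest_label(segment_features, labeled_examples):
    # Label a segment by its closest match in an annotated dataset
    # (Euclidean distance over a toy feature vector).
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    best = min(labeled_examples,
               key=lambda ex: dist(segment_features, ex["features"]))
    return best["label"]

# Hypothetical annotated exemplars: (flatness, curvature) pairs.
dataset = [
    {"features": (0.9, 0.1), "label": "functional"},  # e.g. a threaded neck
    {"features": (0.2, 0.8), "label": "aesthetic"},   # e.g. a decorative body
]

segments = {"rim": (0.8, 0.2), "body": (0.3, 0.7)}
labels = {name: nearest_label(f, dataset) for name, f in segments.items()}

# The system's labels are only recommendations: the user can flip any of them.
labels["body"] = "aesthetic"
print(labels)  # → {'rim': 'functional', 'body': 'aesthetic'}
```

The key design point the quote makes is that the classifier's output is a suggestion, not a decision; the final say stays with the human.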
The human in the loop
Once the user accepts the segmentation, they enter a natural language prompt describing the desired design elements, such as “a rough, multi-colored Chinese planter” or a phone case “in the style of Moroccan art.” An AI system, known as Text2Mesh, then tries to determine what a 3D model that meets the user’s criteria would look like.
It manipulates the aesthetic segments of the model in Style2Fab, adding texture and color or adjusting the shape, to make it look as similar as possible. But the functional segments are off-limits.
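The constraint that styling touches only aesthetic segments can be expressed as a simple guard around whatever transformation the styling model proposes. This is a schematic sketch, not Text2Mesh itself: the dictionary-of-vertex-heights "model" and the ripple transform are stand-ins invented for illustration.

```python
def stylize(model, labels, style_fn):
    # Apply the styling function only to aesthetic segments;
    # functional geometry passes through untouched.
    return {seg: (style_fn(verts) if labels[seg] == "aesthetic" else verts)
            for seg, verts in model.items()}

# Toy model: each segment is just a list of vertex heights.
model = {"rim": [1.0, 1.0], "body": [0.5, 0.5]}
labels = {"rim": "functional", "body": "aesthetic"}

# A fake "texture" perturbation standing in for the AI's edits.
ripple = lambda verts: [v + 0.1 * i for i, v in enumerate(verts)]

styled = stylize(model, labels, ripple)
print(styled)  # the rim is unchanged; only the body is perturbed
```

In the real system the stylization is a learned mesh optimization rather than a fixed function, but the gating logic, freeze functional geometry, restyle the rest, is the point this passage makes.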
The researchers integrated all of these elements into the backend of a user interface that automatically segments and then styles a model based on a few clicks and user inputs.
They conducted a user study with makers spanning a wide variety of 3D modeling experience levels and found that Style2Fab was useful in different ways depending on the maker’s expertise. Novice users were able to understand and use the interface to stylize designs, and it also provided fertile ground for experimentation with a low barrier to entry.
For experienced users, Style2Fab helped speed up their workflows, and some of its advanced options gave them more precise control over stylizations.
In the future, Faruqi and his collaborators want to expand Style2Fab so that the system offers precise control over physical properties as well as geometry. For example, changing the shape of an object can change the force it can withstand, which could cause it to fail when manufactured. Additionally, they want to improve Style2Fab so that a user can generate their own custom 3D models from scratch within the system. The researchers are also collaborating with Google on a follow-up project.
This research was supported by the MIT-Google Program for Computing Innovation and used facilities provided by the MIT Center for Bits and Atoms.