Importing 3D models into Unity: the pitfalls

    This is the third article in our series on working with 3D models in Unity. Previous articles: “Features of working with Mesh in Unity” and “Unity: procedural editing of Mesh”.

    In computer graphics there are many formats for representing 3D models. Some are positioned as universal, others as optimized for specific tasks or platforms. Every field dreams of working with one universal format, but reality says otherwise. Worse, this zoo of formats feeds a vicious circle: developers of “universal” tools invent their own internal formats to generalize the existing ones, growing the population further and spawning yet more conversion utilities. Hence the problem of data being lost or distorted during conversion. The problem is as old as the (IT) world, and importing models into Unity is no exception.

    In this article we will talk about some of the difficulties you encounter when working with models in Unity (how ModelImporter behaves, the differences between representations of 3D objects, etc.), as well as the tools we created to overcome these difficulties.



    Features of ModelImporter


    Recall that for the GPU API the one and only three-dimensional primitive is the triangle, while geometry in FBX, for example, can be represented as quadrangles (quads). Modern 3D modeling packages, as a rule, allow different levels of abstraction, but even there the result is ultimately rendered with triangles.

    At the same time, many modeling tools are geared towards working with quads, which encourages 3D artists to use that primitive as the main one. In such cases the technical specification often requires triangulating the model before delivery. If triangulation is not done, Unity's ModelImporter performs it automatically, in its standard mode, when the file is added. This is where errors creep in, because triangulation algorithms are implemented differently in different packages. Choosing the diagonal that splits a quad is ambiguous, and most of the resulting problems fall into two groups.
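    The ambiguity can be shown with a toy sketch (our own illustration, not Unity's actual algorithm): a quad ABCD can be split along diagonal AC or along BD, and for a non-planar quad the two choices yield different surfaces.

```python
# Toy illustration of triangulation ambiguity: the same quad, two diagonals.

def triangulate_quad(quad, diagonal="AC"):
    """Split a quad (list of 4 vertices A, B, C, D) into two triangles."""
    a, b, c, d = quad
    if diagonal == "AC":
        return [(a, b, c), (a, c, d)]
    return [(a, b, d), (b, c, d)]

def midpoint(p, q):
    return tuple((pi + qi) / 2 for pi, qi in zip(p, q))

# A non-planar quad: vertex D is lifted 0.5 units off the plane of A, B, C.
quad = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0.5)]

# The quad's center lies on whichever diagonal was chosen, so the height of
# the rendered surface there depends on the split.
print(midpoint(quad[0], quad[2])[2])  # 0.0  (diagonal AC)
print(midpoint(quad[1], quad[3])[2])  # 0.25 (diagonal BD)
```

    Both splits are valid triangulations of the same quad, yet the surfaces they produce differ, which is exactly why two packages can disagree.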

    The first group concerns the correctness of the model's shape: the shape of a non-planar quad depends directly on the choice of diagonal.


    Suzanne triangulated in Blender (Quad Method: Beauty) and in Unity (automatically on import)

    In addition, the normal map baking algorithm uses the split data, so a difference in triangulation can produce artifacts such as a cross-shaped seam across a highlight.


    The scooter of a healthy person and the scooter of a smoker

    The second group of problems shows up in the UV unwrap. For example, suppose we have a quad with an angle obtuse enough for the error to occur. When previewed in a 3D package, it is split by one of its diagonals into two quite well-shaped triangles.


    Original polygon


    Polygon triangulated in Blender

    However, after importing into the project it turns out that the quad was split by the other diagonal, and one of the triangles is either fully degenerate or close to it.


    A polygon in Unity with a near-degenerate triangle (the right-hand triangle is almost indistinguishable from a line segment)

    The problems caused by degenerate polygons stem from floating-point rounding errors, as well as from how pixels are interpolated during rendering. What the hell happens with such triangles: they twitch and change color every frame. Their extremely small cross-section makes lighting hard to process, which is why parts of dynamic objects may flicker. And the resulting non-determinism of lightmap baking is no good either.
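    Sliver triangles like this are easy to flag automatically. A hypothetical heuristic (our own, not something Unity exposes): compare twice the triangle's area with the square of its longest edge; for a well-shaped triangle the ratio is sizable, for a near-degenerate sliver it approaches zero.

```python
import math

def dist(p, q):
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

def degeneracy_ratio(a, b, c):
    """2 * area / longest_edge^2; ~0 means the triangle is almost a segment."""
    ab = [b[i] - a[i] for i in range(3)]
    ac = [c[i] - a[i] for i in range(3)]
    # Cross product magnitude equals twice the triangle's area.
    cross = (ab[1] * ac[2] - ab[2] * ac[1],
             ab[2] * ac[0] - ab[0] * ac[2],
             ab[0] * ac[1] - ab[1] * ac[0])
    twice_area = math.sqrt(sum(x * x for x in cross))
    longest = max(dist(a, b), dist(b, c), dist(a, c))
    return twice_area / (longest * longest) if longest else 0.0

healthy = degeneracy_ratio((0, 0, 0), (1, 0, 0), (0, 1, 0))      # 0.5
sliver = degeneracy_ratio((0, 0, 0), (1, 0, 0), (0.5, 1e-6, 0))  # ~1e-6
print(healthy, sliver < 1e-5)
```

    A validator can then reject or warn about triangles whose ratio falls below a chosen threshold.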

    I am a 3D package, and this is how I see it




    In 3D modeling there is often a discrepancy between the actual number of vertices and the number a 3D package displays. The essence of the problem lies in the information the video card needs for processing. The vertex data structure is fixed and includes the position, normal, tangent, texture coordinates for each UV channel, and color. In other words, you cannot squeeze two normals into a single vertex.

    It is not always obvious to artists that a vertex is defined by more than just its position. Modelers know the concepts of Hard/Soft Edges and UV Seams well, but not everyone understands how they are implemented programmatically. On top of that, 3D packages add to the confusion by showing, in their standard mode, the number of vertices as the number of unique positions.

    For example, the usual Cube primitive is geometrically described by 8 vertices. However, to correctly convey light reflecting off each face and to apply the texture correctly, each corner of the cube needs 3 vertices with the same position but different normals and texture coordinates, because 3 faces meet in each corner. A small section of the Unity documentation is dedicated to this point, with examples.
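    The cube arithmetic can be sketched directly (a simplified model of the real vertex layout: here a "GPU vertex" is just a position paired with its face normal, labelled by axis name):

```python
from itertools import product

corners = list(product((0, 1), repeat=3))  # 8 unique corner positions

# Each of the 6 faces uses the 4 corners lying on it; the face normal
# differs per face, so the (position, normal) pairs are all distinct.
faces = {
    "+x": [c for c in corners if c[0] == 1],
    "-x": [c for c in corners if c[0] == 0],
    "+y": [c for c in corners if c[1] == 1],
    "-y": [c for c in corners if c[1] == 0],
    "+z": [c for c in corners if c[2] == 1],
    "-z": [c for c in corners if c[2] == 0],
}

gpu_vertices = {(pos, normal) for normal, quad in faces.items() for pos in quad}
print(len(corners), len(gpu_vertices))  # 8 24
```

    8 positions in the modeling package become 24 vertex records on the GPU: each corner is shared by 3 faces and is split 3 ways.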


    Cube metrics in Blender


    Cube metrics in Unity

    Enough tolerating this!


    Having run into these and similar problems, we decided to build a tool that analyzes and validates models as they are imported into a Unity project. In other words, a custom validator which, when told “Eat!”, answers “I won't! Redo it”, or else spits out a set of warnings and parameter values, signalling that something does not sit well with it.

    For analysis and verification we implemented the following functionality:

    • counting the number of unique vertex positions, colored vertices, Hard Edges, and UV Seams;
    • computing the Axis-Aligned Bounding Box (AABB) and its center;
    • detecting UV coordinates that fall outside the 0.0–1.0 range;
    • detecting overlaps in the UV unwrap;
    • checking that the UV unwrap's pixel padding is adequate for a given texture resolution.

    What does this give us?

    Counting unique vertex positions, Hard Edges, UV Seams, and colored vertices lets us verify that the artist's model survived import into Unity intact. It also lets us monitor compliance with optimization requirements (for example, that the vertex count does not exceed a certain budget). Because of that same peculiarity of 3D packages, which in fact display the number of unique positions, it happens that the vertex-count metric in the model editor satisfies the limit, yet after the file is added to the project it turns out otherwise.
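    A hypothetical validator helper (names are ours, for illustration): given the flat vertex array a mesh exposes after import, recover the number the artist saw in the 3D package (unique positions) and the extra records introduced by hard edges and UV seams.

```python
from itertools import product

def vertex_metrics(positions):
    """Compare the imported vertex count with the artist-facing position count."""
    unique = set(positions)
    return {
        "gpu_vertices": len(positions),
        "unique_positions": len(unique),
        "extra_from_splits": len(positions) - len(unique),
    }

# For an imported cube, every corner appears 3 times (once per adjacent face).
cube_positions = [c for c in product((0, 1), repeat=3) for _ in range(3)]
print(vertex_metrics(cube_positions))
# {'gpu_vertices': 24, 'unique_positions': 8, 'extra_from_splits': 16}
```

    A polycount budget should be checked against `gpu_vertices`, not the number the modeling package reports.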

    Computing the AABB and its center lets us determine the model's offset relative to the origin of its own coordinate system. This is needed for predictable positioning of assets that are instantiated in the scene at runtime. For instance, a building's AABB should, ideally, have minY = 0, while a chandelier that hangs from the ceiling should have maxY = 0.
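    The AABB check itself is a few lines (the sample vertices below are made up for illustration):

```python
def aabb(vertices):
    """Axis-aligned bounding box: per-axis min/max plus the box center."""
    xs, ys, zs = zip(*vertices)
    mins = (min(xs), min(ys), min(zs))
    maxs = (max(xs), max(ys), max(zs))
    center = tuple((lo + hi) / 2 for lo, hi in zip(mins, maxs))
    return mins, maxs, center

# A toy "building": footprint 4x4, height 6, resting on y = 0.
building = [(-2, 0, -2), (2, 0, -2), (2, 6, 2), (-2, 6, 2)]
mins, maxs, center = aabb(building)
print(mins[1] == 0)  # True: the building sits on its own origin plane
print(center)        # (0.0, 3.0, 0.0)
```

    The same minY/maxY convention check would flag the ceiling-mounted chandelier if its maxY drifted away from 0.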







    UV coordinates outside the 0.0–1.0 range are in most cases intentional (for example, when a texture is tiled across the model). This approach is often used to represent many small low-detail objects in a scene (vegetation) and/or distant ones, as well as to tile large homogeneous objects (buildings). When tiling, the integer part of the coordinates in a given UV channel is simply cut off at the shader level, provided the texture's Wrap Mode is set to Repeat.

    Now imagine that you packed the texture into an atlas (and tucked it in under a blanket :3). The shader now receives already-transformed coordinates mapped into the atlas (x * scale + offset). This time there will most likely be no integer part left to cut off, and the model will climb onto someone else's texture (the blanket turned out to be too small). This problem is solved in two ways.

    The first is to cut off the integer part of the UV coordinates in advance. In this case there is a chance of overlapping polygons, which we will discuss below.

    The second builds on the fact that texture tiling is essentially an optimization technique. Nobody forbids you to enlarge the unwrap and sample the desired piece of texture for the whole model. However, the usable space of the atlas is then used inefficiently.
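    The failure, and the first fix, can be sketched numerically (the scale/offset values are made up; `sample_repeat` models what the GPU does in Repeat mode):

```python
import math

def sample_repeat(u):
    """Repeat wrap mode: only the fractional part of the coordinate matters."""
    return u - math.floor(u)

def to_atlas(u, scale, offset):
    """Remap a UV coordinate into an atlas region [offset, offset + scale]."""
    return u * scale + offset

u = 2.3  # a tiled coordinate: samples like 0.3 on the standalone texture
print(round(sample_repeat(u), 6))  # 0.3

# Fix 1: cut the integer part BEFORE remapping -- stays inside the region.
atlas_u = to_atlas(sample_repeat(u), scale=0.25, offset=0.5)   # 0.575
# Remapping the raw coordinate: 1.075 wraps to ~0.075, i.e. someone
# else's texture.
bad_u = to_atlas(u, scale=0.25, offset=0.5)
print(0.5 <= atlas_u <= 0.75, 0.5 <= sample_repeat(bad_u) <= 0.75)  # True False
```

    Pre-cutting the integer part keeps the sample inside the model's own atlas region, at the cost of possible polygon overlaps in the unwrap.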



    Overlaps in a UV unwrap are also often deliberate: they are needed to use texture space effectively. Sometimes a beginner makes a mistake, a senior colleague sees it, pronounces a strong word, and the beginner never does it again. But sometimes the overlap is so small, and in such an unexpected place, that the senior colleague may not notice it.
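    Such small overlaps can be caught automatically. Below is a naive sketch of how a rasterization-style overlap check might work (our assumption for illustration, not our production implementation): draw every UV triangle into a coverage grid and flag texels touched more than once.

```python
def point_in_tri(p, a, b, c):
    """2D point-in-triangle test via signed areas (winding-independent)."""
    def sign(p1, p2, p3):
        return (p1[0] - p3[0]) * (p2[1] - p3[1]) - (p2[0] - p3[0]) * (p1[1] - p3[1])
    d1, d2, d3 = sign(p, a, b), sign(p, b, c), sign(p, c, a)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)

def overlapping_texels(uv_triangles, res=64):
    """Count texel centers covered by more than one UV triangle."""
    hits = {}
    for tri in uv_triangles:
        for ix in range(res):
            for iy in range(res):
                p = ((ix + 0.5) / res, (iy + 0.5) / res)
                if point_in_tri(p, *tri):
                    hits[p] = hits.get(p, 0) + 1
    return sum(1 for n in hits.values() if n > 1)

separate = [((0, 0), (0.4, 0), (0, 0.4)), ((1, 1), (0.6, 1), (1, 0.6))]
stacked = [((0, 0), (0.4, 0), (0, 0.4)), ((0, 0), (0.4, 0), (0, 0.4))]
print(overlapping_texels(separate), overlapping_texels(stacked) > 0)  # 0 True
```

    A real tool would use a proper rasterizer and conservative coverage, but even this brute-force version catches overlaps a human eye misses.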

    On an experienced team, errors that go undetected in the base unwrap make it into the project a little more often than never. It is another matter when the conditions for using ready-made content change.

    Example. We were working with a set of models for dynamic objects in the game. Since lighting did not have to be baked for them, overlaps were allowed in the UV unwrap.


    An example of a base UV unwrap with overlaps (shown in red)

    However, we then decided to use these models not as dynamic objects but to place them as static decor in a level. As an optimization, as you know, the lighting of static objects in a scene is baked into a special atlas. These models had no separate UV2 channel for the lightmap, and the quality of Unity's automatic generator did not satisfy us, so we decided to use the base UV unwrap for baking wherever possible.

    Here we ran into obvious problems with lighting correctness. Clearly, rays hitting a statue in the eye should not create a highlight on the back of its head.


    Incorrectly baked model lighting (left) and corrected (right)

    When building a lightmap, Unity first of all tries to use the UV2 channel. If it is empty, the main UV is used; if that one is empty too, then, excuse me, you get an exception. There are two ways to bake models into a lightmap in Unity without a pre-prepared UV2.

    First, Unity offers automatic UV2 generation based on the model's geometry. This is faster than doing it by hand, and the tool can be tuned with several parameters. Yet even so, the resulting light-and-shadow distribution is often unsatisfactory for highly detailed objects, with seams and light leaks in the wrong places, and the packing of the unwrap's islands is not the most efficient.



    The second way is to use the base UV unwrap for baking. A very attractive option, because with one UV unwrap there is less chance of making a mistake than with two. For this reason we try to minimize the number of models with overlaps in the base UV, and the tools we created help us do that.

    Checking that the UV unwrap's pixel padding is adequate for a given texture resolution is a more precise, rasterization-based UV validation. We will describe this method in the next article in the series.

    To summarize: of course, it is almost impossible to track every nuance; sometimes you have to put up with an imperfect result to finish the task on time. But catching even a fraction of such defects speeds up development and improves the project's quality.
