Experience porting an application from Unity3D to the iOS SDK and SceneKit

Today we share the experience of our partners, Try Sports Now, who gave a second life to their application using the SceneKit framework.

“It happens that an application that has long been languishing in obscurity suddenly begins to gain popularity among users and turn a profit. Naturally, in that situation it makes sense to develop and update it. The catch is that the source code may have become so outdated while demand was low that the time spent updating it is comparable to the effort of developing the product from scratch. We ran into exactly this problem while working on the Human Anatomy 3D project. In this article, we describe how the new version of the application was moved from Unity3D sources to native ones, and highlight some of the problems that arose along the way.

The application had been written several years earlier on the cross-platform Unity3D engine. Over time, cross-platform support became irrelevant, while the App Store version continued to take up a lot of space on users' devices, which understandably annoyed them. The capabilities of the Unity3D engine were also overkill for the required functionality: all we had to do was display 3D objects with basic interaction and animation. Given all this, we decided to rebuild the application with native iOS and macOS sources and to use SceneKit for the modules that work with 3D objects. The decision was driven mainly by the fact that we were already familiar with this tool.

A Short Review of SceneKit


SceneKit is a high-level framework for working with three-dimensional objects. It includes a physics engine, a particle generator, and a convenient API. SceneKit is built around a scene graph, an approach widely used both in game engines (Unity, Cocos, etc.) and in 3D editors (Blender, Lightwave, etc.). Consider its typical structure:



The main element of this structure is the node (SCNNode), which contains information about position, rotation angles, and scale. The position, rotation, and scale of child nodes are determined relative to their parent node. The other key objects attached to nodes are:

  • SCNGeometry defines the shape of an object and, through its set of materials, how it is rendered;
  • SCNCamera is responsible for the point from which we see the scene (similar to the camera when shooting a movie);
  • SCNLight is responsible for lighting; its position affects all objects in the scene.

As you can see from the structure, the root object is SCNScene, which holds a reference to the root node. The entire structure of the scene is built relative to this node.
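
To make this structure concrete, here is a minimal sketch that assembles such a graph in code; all names and values here are illustrative, not taken from the project:

Swift:

import SceneKit
import UIKit  // for UIColor; use AppKit/NSColor on macOS

// Build a minimal scene graph: geometry, a camera, and a light,
// all attached as children of the scene's root node.
let scene = SCNScene()

// A node whose geometry defines its shape and its material.
let sphereNode = SCNNode(geometry: SCNSphere(radius: 1))
sphereNode.geometry?.firstMaterial?.diffuse.contents = UIColor.red
scene.rootNode.addChildNode(sphereNode)

// A node that carries the camera; its position is relative to the root.
let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
cameraNode.position = SCNVector3Make(0, 0, 5)
scene.rootNode.addChildNode(cameraNode)

// A node that carries an omnidirectional light.
let lightNode = SCNNode()
lightNode.light = SCNLight()
lightNode.light?.type = .omni
lightNode.position = SCNVector3Make(0, 3, 3)
scene.rootNode.addChildNode(lightNode)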

Now let's see how it was used in our project. First, a scene object (SCNScene) was created, and a finished scene with a model was loaded into it. To store scenes and textures we used the recommended .scnassets folder, since Xcode then optimizes them for maximum performance on the device. We imported the scene from the editor in the COLLADA format (*.dae). Here is an example of loading a scene from .scnassets:

Objective-C:

SCNScene *scene = [SCNScene sceneNamed:@"path to the scene file"];

Swift:

let scene = SCNScene(named: "path to the scene file")
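
Note that SCNScene(named:) returns nil if the file cannot be found or parsed, so in practice the load is worth guarding. A small sketch (the file name here is a placeholder, not the project's actual asset):

Swift:

// The throwing URL-based initializer reports an actual error
// instead of silently returning nil.
guard let sceneURL = Bundle.main.url(forResource: "model", withExtension: "dae") else {
    fatalError("scene file is missing from the bundle")
}
do {
    let scene = try SCNScene(url: sceneURL, options: nil)
    // ... work with the scene ...
} catch {
    print("Failed to load scene: \(error)")
}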

As a result, the entire object hierarchy of the loaded scene is added under the scene's root node (rootNode). Next, we wanted to find our model in the scene and assign it a material with a texture; the texture is an image stored in the same .scnassets folder:

Objective-C iOS:
SCNNode *meshNode = [scene.rootNode childNodeWithName:@"name of the node with the model" recursively:YES];
meshNode.geometry.materials.firstObject.diffuse.contents = [UIImage imageNamed:@"name of the texture file"];

Swift macOS:

let meshNode: SCNNode? = scene?.rootNode.childNode(withName: "name of the node with the model", recursively: true)
meshNode?.geometry?.materials.first?.diffuse.contents = NSImage(named: "name of the texture file")

As you can see, the desired node is looked up simply by name. The recursively parameter determines whether to search deeper into the scene graph or to restrict the search to the immediate children of the current node. Having found the node we need, we take the first material on it and assign it the selected texture.
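
When the node name is not known in advance, a predicate-based search can be used instead. A small sketch, assuming scene is the (non-optional) loaded scene and with an illustrative criterion:

Swift:

// Collect every node in the subtree that carries geometry,
// regardless of its name.
let meshNodes = scene.rootNode.childNodes { node, _ in
    node.geometry != nil
}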

Next, we need to add a camera. To do this, we create and add two nodes to the scene hierarchy: cameraNode, which will be the camera itself, and cameraRoot, to which the camera is attached and relative to which it will move:

Objective-C iOS:

SCNNode *cameraNode = [SCNNode node];
cameraNode.camera = [SCNCamera camera];
SCNNode *cameraRoot = [SCNNode node];
cameraNode.position = SCNVector3Make(0, 0, 3);
[cameraRoot addChildNode:cameraNode];
[scene.rootNode addChildNode:cameraRoot];


Swift macOS:

let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
let cameraRoot = SCNNode()
cameraNode.position = SCNVector3Make(0, 0, 3)
cameraRoot.addChildNode(cameraNode)
scene.rootNode.addChildNode(cameraRoot)

This is done for convenience when working with the camera: all movements are carried out relative to a fixed point, which is handy in cases where, for example, the camera should fly around an object.
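
For instance, with this layout a simple fly-around needs nothing more than rotating cameraRoot; a sketch using SCNAction (the duration is arbitrary):

Swift:

// Since cameraNode sits 3 units away from cameraRoot, spinning
// cameraRoot around its Y axis makes the camera orbit the model.
let orbit = SCNAction.rotateBy(x: 0, y: 2 * .pi, z: 0, duration: 10)
cameraRoot.runAction(SCNAction.repeatForever(orbit))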

To display the scene, the SceneKit View interface element is used; it is added to the application screen in Interface Builder.


Since our project does not need any specific lighting, we will not add light sources to the scene. Instead, we use the default lighting, which is controlled by the autoenablesDefaultLighting property of SCNView. All that remains is to hand our scene to the SCNView component:

_sceneView.scene = scene;

SceneKit has a built-in camera control mechanism, enabled by the allowsCameraControl property. In our case, however, it had to be turned off, since we needed to limit the amplitude of the camera's rotation around the object.
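
Putting the view configuration together, the setup amounts to a few lines (sceneView here stands for the SCNView outlet, _sceneView in the Objective-C code above):

Swift:

sceneView.scene = scene
// A default omnidirectional light that follows the camera.
sceneView.autoenablesDefaultLighting = true
// Disabled: we handle gestures ourselves to limit the rotation.
sceneView.allowsCameraControl = false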

An example of working with objects


As an example of working with scene objects, consider the implementation of zooming and of rotating the camera around our model.

Let's start with the implementation of zooming in and out, which essentially comes down to moving the camera node along the Z axis relative to its parent node. Movements, rotations, and scaling in SceneKit are carried out in the same way as in Core Animation, since SceneKit builds on Core Animation to implement animation:

Objective-C iOS:

- (void)zoomCamera:(CGFloat)zoom {
    CGFloat speedZ = 0.245;
    // Map the gesture value to a fixed step along Z: values below 1
    // move the camera away from the model, values above 1 move it closer.
    if (zoom < 1)
        zoom = speedZ;
    else if (zoom > 1)
        zoom = -speedZ;
    // Respect the near (z = 2.5) and far (z = 6) limits of the zoom range.
    if (zoom < 0) {
        if (cameraNode.position.z < 2.5)
            return;
    } else {
        if (cameraNode.position.z > 6)
            return;
    }
    cameraNode.transform = SCNMatrix4Translate(cameraNode.transform, 0, 0, zoom);
}

Swift macOS:

var allZoom: CGFloat = 0

func zoomCamera(zoom: CGFloat) {
    // allScrollX is updated by the rotation handler; zoom only while
    // horizontal rotation is small.
    if allScrollX < 1 {
        let cam = ViewController.cameraRoot.childNodes[0]
        allZoom = zoom
        let speedZ = CGFloat(7)
        // Clamp the incoming scroll delta to a maximum step.
        if allZoom < -speedZ {
            allZoom = -speedZ
        } else if allZoom > speedZ {
            allZoom = speedZ
        }
        // Respect the near (z = 3) and far (z = 5) limits of the zoom range.
        if allZoom < 0 {
            if cam.position.z < 3 {
                return
            }
        } else {
            if cam.position.z > 5 {
                return
            }
        }
        cam.transform = SCNMatrix4Translate(cam.transform, 0, 0, allZoom * 0.035)
    }
}

In this function we determine whether we are zooming in or out, and if the current camera position is within the allowed range, we move the camera using SCNMatrix4Translate. Its first parameter is the transform relative to which the translation is applied; the remaining parameters are the offsets along the X, Y, and Z axes, respectively. When implementing zoom on macOS, keep in mind that the Apple Magic Mouse and trackpad report much larger scroll deltas than a standard mouse wheel, which can make the camera overshoot the zoom limits.
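
One way to compensate for this, assuming the zoom is driven from scrollWheel(with:) in a subclass of the view, is to scale down the precise deltas before they reach the zoom function (a sketch; the scale factor is illustrative):

Swift macOS:

override func scrollWheel(with event: NSEvent) {
    var delta = event.scrollingDeltaY
    // Trackpads and the Magic Mouse report fine-grained ("precise")
    // deltas that are much larger than classic wheel ticks; scale them
    // down so the zoom limit checks see comparable step sizes.
    if event.hasPreciseScrollingDeltas {
        delta *= 0.25
    }
    zoomCamera(zoom: delta)
}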

The implementation of rotating the camera around the model is largely similar to the zoom implementation, except that rotation naturally involves two axes at once: the Y axis for flying around the model and the X axis for viewing it from above or below:

Objective-C iOS:

- (void)rotateCamera:(CGPoint)rotation {
    allScrollX = rotation.x / 2;
    // Horizontal movement rotates the model itself around the Y axis.
    for (SCNNode *item in _sceneView.scene.rootNode.childNodes) {
        if (item != cameraRoot && item != cameraNode)
            item.transform = SCNMatrix4Rotate(item.transform, rotation.x * M_PI / 180.0, 0, 0.1, 0);
    }
    // Limit the vertical tilt to ±45 degrees.
    if (cameraRoot.eulerAngles.x * 180.0 / M_PI > 45 && rotation.y < 0) {
        return;
    }
    if (cameraRoot.eulerAngles.x * 180.0 / M_PI < -45 && rotation.y > 0) {
        return;
    }
    cameraRoot.transform = SCNMatrix4Rotate(cameraRoot.transform, rotation.y * M_PI / 180.0, -0.1, 0, 0);
}

Swift macOS:

func angleCamera(x: CGFloat, y: CGFloat) {
    // allZoom is updated by the zoom handler; rotate only while
    // the zoom delta is small.
    if allZoom < 1 {
        allScrollX = x / 2
        let cam = ViewController.cameraRoot
        let rootNode = ViewController.sceneRoot
        let dX = x * CGFloat(Double.pi / 180)
        // Horizontal movement rotates the model itself around the Y axis.
        for item in rootNode.childNodes {
            if item.name != "camera" {
                item.transform = SCNMatrix4Rotate(item.transform, -dX, 0, 0.1, 0)
            }
        }
        let angle = cam.eulerAngles
        let dY = y * CGFloat(Double.pi / 180)
        // Limit the vertical tilt to about ±1 radian.
        if dY < 0 {
            if angle.x < -1 {
                return
            }
        } else {
            if angle.x > 1 {
                return
            }
        }
        cam.transform = SCNMatrix4Rotate(cam.transform, dY, 0.1, 0, 0)
    }
}

Here we rotate the model itself around the Y axis so that the rotations around the two axes remain independent of each other. The rotation itself is done with SCNMatrix4Rotate. Its first parameter is the transform relative to which the rotation is applied, the second is the rotation angle, and the remaining three are the components of the rotation axis along X, Y, and Z, respectively.
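
For completeness, here is a sketch of how such functions can be driven on iOS with a pan gesture; the controller, outlet, and stub below are illustrative, not the project's actual code:

Swift iOS:

import UIKit
import SceneKit

class ModelViewController: UIViewController {
    @IBOutlet var sceneView: SCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        let pan = UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:)))
        sceneView.addGestureRecognizer(pan)
    }

    @objc private func handlePan(_ gesture: UIPanGestureRecognizer) {
        // translation(in:) accumulates from the start of the gesture,
        // so reset it after each callback to obtain per-step deltas.
        let delta = gesture.translation(in: gesture.view)
        rotateCamera(CGPoint(x: delta.x, y: delta.y))
        gesture.setTranslation(.zero, in: gesture.view)
    }

    private func rotateCamera(_ rotation: CGPoint) {
        // ... rotation logic as in the rotateCamera: method above ...
    }
}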

Problems


There were no problems with the new iOS version of the application; we even managed to lower the minimum required iOS version to 10.0. The difficulties began when a similar application had to be implemented on macOS. On macOS versions below 10.12, we ran into serious problems when working with models:

  • animations were ignored;
  • coordinates of animations were shifted;
  • the probability of an Xcode crash when working with models increased.

These problems have not yet been resolved, so for now we had to leave the minimum required version at macOS 10.12.

Conclusion


Porting the application from Unity3D to the iOS SDK and SceneKit simplified and sped up the development of the application's interface and the implementation of gesture control. With little extra effort, the iOS and macOS interfaces became as familiar to users as possible, and the size of the app archives shrank by a factor of 2-2.5. Overall, given our modest display and interaction requirements, SceneKit let us integrate three-dimensional objects with animations on recent versions of macOS and iOS without major difficulty, and implement simple camera interaction as well.”
