Experiments with AR: When C# Meets CSS

Often when working on a project, the greatest technical difficulties arise where you least expect them. In my case, this happened while working with Google Creative Lab on a prototype experiment to bring Grace VanderWaal's "Moonlight" into augmented reality. We liked the idea of surrounding the viewer with beautiful handwritten lyrics that would unfold and float in space as they moved through it.
Our AR lyrics in the real world

I was the coder on the project, and when I started the prototype, it seemed to me that the hardest part would be the AR itself. Placing objects in AR and keeping their positions stable must take a huge amount of complex math, right? In fact, thanks to ARCore, which took care of the heavy lifting, that part turned out to be fairly trivial. Creating an animated handwriting effect in 3D space, however, was not so simple.
In this post, I will talk through some tricky hacks I used to solve the problem of animating our two-dimensional handwritten text in 3D using Unity's LineRenderer component, in a way that stays fast even with thousands of points.
Concept
When we thought about possible musical experiments to do in AR, we were intrigued by the popularity of lyric videos. Far from simple karaoke-style affairs, these videos look amazing, and producing them can take as much effort and aesthetic care as any other music video. You may have seen Katy Perry's "Roar" lyric video with emoji, or Taylor Swift's Saul Bass-style video for "Look What You Made Me Do".
We stumbled upon Grace VanderWaal's lyric videos and liked their style. They often feature her handwritten text, so we wondered: could we make a handwritten lyric video, but embed it in AR? What if the song wrote itself around the viewer in time with the singing, not as flat images, but as graceful, floating, three-dimensional lines?
We instantly fell in love with this concept and decided to try to implement it.
Getting Points
To begin with, I analyzed the data we would need to simulate the effect of writing text by hand.
We didn't just want to place images of the text in the scene; we needed the text to feel physical, something the viewer could walk around and examine. If we simply placed 2D images in 3D space, they would vanish when viewed from the side.

Top left and right: a PNG image in a Unity scene. It looks fine until you start moving around it; viewed from the side, it practically disappears, and if you get close to it, it can look pixelated.
Bottom left and right: 3D point data used to draw a line in space. Seen from the side, the lines retain a sense of volume. We can walk around and pass through them, and they still read as a 3D object.
So, no images: I would have to convert the handwritten artwork into point data that could be used to draw lines.
To make it look like real handwriting, I also needed to know the order in which the points were drawn. Handwritten strokes double back and cross each other, so the points on the left don't necessarily appear before the points on the right. But how do we get this ordered data? We can't extract it from a pixel image: of course, we can read the color values of the pixels at each (x, y), but that tells us nothing about which point should be drawn first.

Behold: programmer art! Suppose we are trying to draw the word "hello" from data, and the dashed line marks the point we are currently at. The example on the left shows what happens if we simply read the PNG data and draw the colored pixels on the left before those on the right. It looks very weird! Instead, we want to reproduce the example on the right, and for that we need the order in which the points were drawn.
Vector data, however, consists of ordered point data by definition. So PNG is out, but SVG will do just fine.
But is Unity capable of this?
Point drawing
As far as I know, Unity has no native support for extracting point data from SVG. Those wanting to use SVG in Unity seem to rely on (often expensive) third-party assets with varying levels of support, and those assets appear focused on displaying SVGs rather than exposing a file's point and path data. As noted above, we didn't want to display the SVG in space: we just needed its points as an ordered array.
After mulling this over for a while, I realized that while Unity doesn't support SVG, it fully supports loading XML through several standard C# classes (for example, XmlDocument). And SVG is really just an XML-based vector image format. Could I load SVG data by simply changing the extension from .svg to .xml?
Surprisingly, the answer was positive!
So we built the following workflow: our artists drew the lyrics as outlines in Illustrator, I simplified those outlines into straight line segments, we exported the outline data as SVG, converted it to XML (literally just changing the file extension from .svg to .xml), and loaded it into Unity without any problems.

Left: one of the SVGs created by our artists.
Right: the XML data. Each curve is simplified into polylines. For practical and aesthetic reasons, we decided that all the words in the lyrics would begin drawing at the same time.
I did just that, and was pleased to find that we could easily load the data into Unity and feed it to a LineRenderer without any problems. Better yet, since LineRenderer billboards its line toward the camera by default, the result looked like a line with 3D volume. Hooray! Handwritten text in AR! Problem solved, right? Well, not quite…
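As a concrete illustration, here is a minimal sketch of that loading step. It assumes the renamed SVG sits in a Resources folder and contains `<polyline>` elements whose `points` attribute holds ordered "x,y" pairs; the `WordLine` class and the unit scale are my own illustration, not the project's actual code:

```csharp
using System.Globalization;
using System.Xml;
using UnityEngine;

public class WordLine : MonoBehaviour
{
    public string resourceName = "moonlight_word"; // renamed .svg -> .xml, placed in Resources/
    public LineRenderer line;

    void Start()
    {
        // SVG is just XML, so the standard XmlDocument class loads it happily.
        var doc = new XmlDocument();
        doc.LoadXml(Resources.Load<TextAsset>(resourceName).text);

        // Grab the first <polyline>; its "points" attribute is an ordered
        // list of "x,y" pairs: exactly the drawing order a PNG can't give us.
        XmlNode polyline = doc.GetElementsByTagName("polyline")[0];
        string[] pairs = polyline.Attributes["points"].Value
            .Split(new[] { ' ' }, System.StringSplitOptions.RemoveEmptyEntries);

        var positions = new Vector3[pairs.Length];
        for (int i = 0; i < pairs.Length; i++)
        {
            string[] xy = pairs[i].Split(',');
            // SVG's y axis points down, Unity's points up, so flip y;
            // 0.01 is an arbitrary scale from SVG units to meters.
            positions[i] = new Vector3(
                float.Parse(xy[0], CultureInfo.InvariantCulture),
                -float.Parse(xy[1], CultureInfo.InvariantCulture),
                0f) * 0.01f;
        }

        line.positionCount = positions.Length;
        line.SetPositions(positions);
    }
}
```

The key property is that the pairs in `points` arrive already in drawing order, which is exactly what the PNG could not give us.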
Implementing the handwriting animation
So I had handwritten text floating in the air, and now I "only" had to animate the act of writing it.
For my first attempt, I took the LineRenderer and wrote a script that gradually added points to it to create the animation effect. I was stunned by how badly the application started to lag.
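That first attempt looked roughly like this sketch (reconstructed for illustration; the class and field names are mine):

```csharp
using UnityEngine;

// The naive approach: reveal one more point of the stroke every frame
// by growing the LineRenderer's position count.
public class NaiveHandwriting : MonoBehaviour
{
    public LineRenderer line;
    public Vector3[] allPoints; // full, ordered stroke data
    int shown;

    void Update()
    {
        if (shown >= allPoints.Length) return;

        shown++;
        // Resizing the line and pushing new data every frame forces Unity
        // to rebuild the line's geometry: this is what made the app crawl.
        line.positionCount = shown;
        line.SetPosition(shown - 1, allPoints[shown - 1]);
    }
}
```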
It turns out that adding points to a LineRenderer at runtime is a very computationally expensive operation. I had wanted to avoid parsing complex SVG curves, so I had simplified the path data into polylines, but preserving the curvature of the outlines required far more points. I had hundreds, sometimes thousands, of points per block of text, and Unity was not happy about me dynamically modifying the LineRenderer's data. Since mobile devices were our target platform, the slowdown was even more serious.
So dynamically adding points to the LineRenderer was out. But how could we achieve the animation effect without it?
Unity's LineRenderer is a notoriously inflexible component, and I could, of course, have sidestepped all this work by buying a third-party asset. But, as with the SVG asset packages, many of them were some combination of expensive, complicated, or ill-suited to our task. Besides, as a coder I was intrigued by the problem and wanted to solve it for the sheer pleasure of it. It seemed to me there should be a simple solution using the components I got for free.
I mulled over the problem while scouring the Unity forums, and every search came up empty. I banged my head against the wall, producing several half-finished solutions, until I realized that I had run into this problem before, albeit in a completely different domain.
Namely in CSS.
Point animation
I remembered reading about this issue several years ago on Chris Wong's blog, where he described in detail how he built NYC Taxis: A Day In The Life. He had animated taxis moving across a map of Manhattan, but didn't know how to make a taxi leave a trail on the (SVG) map.
However, he found that he could manipulate the line's stroke-dasharray parameter to achieve the effect. This property turns a solid line into a dashed one, controlling the length of the dashes and of the gaps between them. A value of 0 means there is no space between the dashes, so they read as a solid line; as you increase the value, the line breaks up into dashes and gaps. With some clever transitions, he was able to animate the line without dynamically adding points.

Besides stroke-dasharray, CSS coders can also manipulate stroke-dashoffset. According to Jake Archibald, stroke-dashoffset controls "where along the path the first 'dash' of the dash pattern created by stroke-dasharray begins".
What does that mean? Suppose we set stroke-dasharray so that the colored dash and the empty gap are each stretched over the entire length of the line. With a stroke-dashoffset of 0, our line is fully colored. But as we increase the offset, we shift the start of the dash further and further along the path, leaving empty space behind it. And we get an animated line being drawn!

If we max out the dasharray value, we can use the offset to make the line look like it is being drawn. Animation taken from Jake Archibald's excellent interactive demo.
Obviously, C# has no stroke-dasharray or stroke-dashoffset. But we can manipulate the tiling and offset of the material used by our shader. The same principle applies: if we have a texture that looks like a dashed curve, one part colored and the other transparent, we can manipulate the texture's tiling and offset to slide it smoothly along the line, transitioning from colored to transparent without touching the points at all!
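In Unity terms, the mapping looks roughly like this. A minimal sketch, assuming the line's material uses the standard _MainTex property and a clamped texture that is opaque on one half and transparent on the other:

```csharp
using UnityEngine;

public static class DashSetup
{
    // stroke-dasharray  ~ texture tiling: one dash + one gap stretched over the line
    // stroke-dashoffset ~ texture offset: slides that pattern along the line
    public static void Configure(LineRenderer line)
    {
        line.textureMode = LineTextureMode.Stretch; // map the texture across the full line
        Material mat = line.material;
        mat.SetTextureScale("_MainTex", new Vector2(1f, 1f));    // one dash/gap cycle per line
        mat.SetTextureOffset("_MainTex", new Vector2(0.5f, 0f)); // halfway: half drawn, half blank
    }
}
```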

My material is half colored (white) and half transparent. Changing the offset by hand makes the text appear to write itself. (In the app, we drive the shader with a simple call to SetTextureOffset.)
And that's exactly what we did! Knowing when each word should start being written and how long the writing should take, I could simply linearly interpolate the offset based on how close we were to the completion time. No manipulation of point data required!
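Here is a minimal sketch of that interpolation. The startTime and duration fields are illustrative placeholders; in the real app the timing would come from the song:

```csharp
using UnityEngine;

// The final approach: the writing animation is driven entirely by the
// material's texture offset; the point data is never touched after setup.
public class HandwritingAnimator : MonoBehaviour
{
    public LineRenderer line;
    public float startTime; // seconds into the song when this word starts writing
    public float duration;  // how long the word takes to write

    Material mat;

    void Start()
    {
        line.textureMode = LineTextureMode.Stretch; // stretch the texture over the whole line
        mat = line.material;
        mat.SetTextureOffset("_MainTex", new Vector2(0.5f, 0f)); // start fully blank
    }

    void Update()
    {
        // 0 = not yet started, 1 = fully written.
        float t = Mathf.Clamp01((Time.time - startTime) / duration);

        // Slide the opaque half of the texture along the line. The exact
        // offsets depend on the texture's layout and (clamped) wrap mode:
        // here 0.5 shows only the transparent half, -0.5 only the opaque half.
        mat.SetTextureOffset("_MainTex", new Vector2(Mathf.Lerp(0.5f, -0.5f, t), 0f));
    }
}
```

Since only a material property changes each frame, the LineRenderer itself never has to rebuild its geometry.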
Speed and frame rate shot back up, and we could watch the AR text smoothly and elegantly write itself across the real world.
In the real world! We started experimenting with z-placement and text on different layers to give it a greater sense of presence in space

I hope you enjoyed this short tour of how I tamed a Unity component known for its inflexibility to achieve a beautiful, cheap effect. Want to see more examples of AR experiments in art, dance, and music? Watch the video of our trio. You can find more about our AR experiments on our Experiments platform, where you can also try out ARCore. Happy lerping!