Athens

Version 2.024

I am back in Athens, Greece, working at the Stoa of Attalos for the Inhabiting Byzantine Athens project under project director Dr. Fotini Kondyli. Over the next two months, I plan to write about this project as I use Obsidian.md and photogrammetry to research the wear/use patterns of Middle-Byzantine coins — specifically those of Manuel I Komnenos. I am currently updating my Obsidian vault with legacy numismatic (coin) data from the 1930s excavations, breaking it into metadata and paradata and seeking to establish best practices for entering it. But that’s not all.
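As an illustration of what separating metadata from paradata in Obsidian might look like, here is a hypothetical coin note with YAML frontmatter. Every field name and value below is my own invented sketch, not the project’s actual schema:

```markdown
---
# Metadata: the legacy record itself (all values illustrative)
inventory_no: "EXAMPLE-001"
ruler: "Manuel I Komnenos"
denomination: "billon trachy"
excavation_year: 1937
findspot: "unknown"
# Paradata: notes about how and when the record was entered
entered_by: "initials"
entered_on: 2024-06-01
source: "legacy excavation notebook"
confidence: "transcribed as written; findspot illegible"
---

Free-form notes, wear/use observations, and links to coin photos go here.
```

Keeping the paradata fields alongside the metadata means every entry carries its own record of how it was digitized, which is one possible "best practice" for legacy data entry.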

This is my setup for capturing coin images. The large rectangular object on the mount is a MagSafe filmmaker cage to which my phone attaches. The lightbox is a Foldio2Plus, paired with a basic motorized turntable. Broken down, all of this is light and very portable, which is practical for imaging small objects in the field. More on this method soon.

Concerning photogrammetry and 3D modelling, I am exploring the viability of the iPhone for modelling objects and landscapes for archaeological research. Why the iPhone? Ease of access and usability. Photogrammetry can be a costly endeavour if you want high-quality, accurate models: it requires a DSLR camera, appropriate lenses, lights, cross-polarization filters, mounts/tripods, light boxes, etc., all varying depending on your needs and the object you wish to model.

Moreover, for archaeologists — who are often on fixed and minimal budgets — transporting all this gear is cumbersome and awkward at best, to say nothing of the environmental elements one must navigate in the field. All in all, 3D modelling is costly, time-consuming and, at times, a pain in the ass. Nevertheless, there are solutions for archaeologists, and with the explosion of AI in most digital tech, these solutions are becoming more accessible.

One such solution is Lumalabs.ai, which uses NeRF technology to render 3D models from photos and — EVEN COOLER — from videos. I’ve embedded some examples in this post. To be clear, I am not arguing that this approach to 3D modelling is the ultimate solution, nor is it the only one. Still, I believe that as the technology advances, the promise of NeRF and AI will make it easier for archaeologists to model material culture and excavation sites for future research. But you must be asking what a NeRF is. No, it is not the toy.

In its basic form, a Neural Radiance Field (NeRF) “represents three-dimensional scenes as a radiance field approximated by a neural network. The radiance field describes color (sic) and volume density for every point and for every viewing direction in the scene” (Gao et al., 2023: 2). In Lumalabs, NeRF uses neural networks to generate complex 3D representations from static or non-static (video) imaging. I upload a video, the Lumalabs software processes it, and voilà, I have a rendered model. For more technical explanations, which I am not remotely qualified to give, see Datagen, which provides a breakdown of the many types of NeRF applications, or this paper by Gao, Kyle, et al., 2023. NeRF is entirely new to me, and I have just started to test it on archaeological objects and large-scale features. The renders are excellent but problematic, and I am unsure how this technology can serve archaeology at the moment beyond basic visualizations. The reason is simple: the video capture needs to be seamless, or the NeRF will fill in the gaps and construct (or fail to construct) the model using its highly complex algorithm. Let’s look at one example. Click this link if you wish to interact with the render.
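To make that quotation slightly more concrete, the standard NeRF formulation described by Gao et al. can be sketched in two lines. The symbols here are the conventional ones from the NeRF literature, not anything specific to Lumalabs:

```latex
% A neural network F_\Theta maps a 3D point x and a viewing direction d
% to a colour c and a volume density \sigma:
F_\Theta : (\mathbf{x}, \mathbf{d}) \longmapsto (\mathbf{c}, \sigma)

% A pixel's colour is rendered by integrating along its camera ray
% r(t) = o + t\mathbf{d}, weighting each sample by the transmittance T(t),
% i.e. how much light survives to that depth:
C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,
  \mathbf{c}(\mathbf{r}(t), \mathbf{d})\,dt,
\qquad
T(t) = \exp\!\Big(-\!\int_{t_n}^{t} \sigma(\mathbf{r}(s))\,ds\Big)
```

This is also why gaps in the capture get "filled in": wherever no video frames constrain a region, the optimized density is simply whatever best fits the views that do exist, so the model can hallucinate structure or leave holes.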

The two examples above demonstrate some of NeRF’s shortcomings for archaeological applications. As we can see, there are some problematic sections in these renderings. Despite my being somewhat meticulous (somewhat, because I do not yet have a fully developed process for image capture), the render on the left has a hole at the base of the feature. I presume this is because it is the point where I began my capture and did not keep it in frame for long. The image on the right shows how the NeRF filled in a gap with the perimeter wall just north of the entrance to the area.

The render shows the wall continuing underground, beneath the drain — but of course it does not. So why did the model decide to fill in this section with the wall? I don't know, especially since the video paid only fleeting attention to this area. However, the areas I attended to more closely in the video came out very well. The image below on the left is of the inside of the drain; the image on the right is of the front of the drain. I paid particular attention to these areas in the video for no particular reason, and the renderings came together very well; I was quite impressed.

Let’s explore another render, where I was more careful and mindful of my recording. Click this Link. It takes you to a render of the Orator’s Bema, above the retaining wall on the north side of Filopappou Hill. I think it is north…meh. Anyway, I tried to capture all the details of the Bema as much as I could, except for the very top, and, as you will notice, the render is very good. It provides exquisite detail from a distance, but as you zoom in, the render breaks down, and details either fade away or no longer resemble the original. This tech has only been around for a few years and is still in development. Still, I suspect that with AI advancing at an incomprehensible pace, NeRF will only get better. So, what’s the point?

Click on the image to get more details

Lumalabs NeRF capture and rendering is appealing because of its ease of use and low learning curve. Both experienced 3D artists/experts and the uninitiated can use this product to produce outstanding renderings. Thus, I argue that archaeologists should explore NeRF modelling under the framework of public archaeological engagement and enchantment. Archaeologists can use such technology to create open and collaborative archaeology with the public by providing easy-to-create 3D models of archaeological landscapes. Furthermore, for the archaeologist, NeRF can aid in the 3D documentation of archaeological landscapes and excavation trenches — even the quick and effective documentation of stratigraphic layers before they are removed/destroyed to reach the next level. Sounds like a video game? We could implement such renderings in archaeological games in the classroom for pedagogical purposes. As with so many other digital approaches to archaeology that are now beginning to incorporate AI, the possibilities are boundless.
