Athenian Word Salad

Digital Captures, Renders and the spaces in-between

Screenshot of a Luma.ai render of the Byzantine exhibit at the Numismatic Museum, Athens.

I’ve been in Athens for just over three weeks, and in that time I have explored various forms of imaging with my iPhone 15 Pro Max. Though I cannot publish the coins I have imaged from the Agora on this blog at this time, I can say that the 3D renders are good but not great. This is partly because I have not yet established a sound methodology for capturing and processing the images. Another issue is that I need a polarization filter for my phone in order to cross-polarize the light and reduce noise in the imaging, a problem I can easily address and will deal with in the near future.

Another form of imaging I have been experimenting with is NeRF, which I discuss briefly in this post. Last weekend, I went to the Numismatic Museum, and apart from exploring the Byzantine collection and a hoard of Manuel I coins that I hope to be able to study, I also experimented with imaging the exhibit via Luma.ai and Polycam 3D using video that I collected during my visit. One render was somewhat successful, while the other was an unmitigated disaster. The renders are posted below.

The differences between the Luma render and the Polycam render are drastic. For Luma, I captured a five-minute video of the room, trying to be as meticulous as possible. Even so, I feel I rushed the capture, because I was worried about people walking through the frame or standing in front of parts of the exhibit and blocking my view. For Polycam, I edited the same video down to three minutes and then rendered it. I believe the length of the video and the rate at which the frames were captured were the dominant factors in the quality of these renders. I have thought this through a bit and will go back to the museum when I can to re-image the space. My goal is to capture a space with video, render it, and then explore the render’s potential as an online interactive exhibit (a rough sketch of that last step follows below). Furthermore, as I have stated before, I am an advocate for making this technology accessible to all archaeologists, indeed all scholars, in order to create accessible platforms for the public to engage with the past. The smartphone, in this case the iPhone, is a key component in creating that accessibility for digital captures.
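To make the ‘online interactive exhibit’ idea a little more concrete, here is a minimal sketch, not a finished workflow, of how an exported capture could be wrapped in a simple web viewer. It assumes the scan has been exported as a .glb file (the filename and page title below are placeholders) and uses Google’s model-viewer web component loaded from a CDN; Python is only used here to write out the HTML page.

```python
# Rough sketch: wrap an exported capture (e.g. a .glb from Luma.ai or Polycam)
# in a minimal interactive web page using Google's <model-viewer> component.
# The model filename and page title are placeholders.
from pathlib import Path

MODEL_FILE = "byzantine_exhibit.glb"   # placeholder export name
PAGE_TITLE = "Byzantine Exhibit, Numismatic Museum (draft)"

html = f"""<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>{PAGE_TITLE}</title>
  <script type="module"
          src="https://unpkg.com/@google/model-viewer/dist/model-viewer.min.js"></script>
</head>
<body>
  <!-- camera-controls lets visitors orbit and zoom; auto-rotate gives a slow spin -->
  <model-viewer src="{MODEL_FILE}"
                alt="3D capture of the exhibit"
                camera-controls
                auto-rotate
                style="width: 100%; height: 80vh;">
  </model-viewer>
</body>
</html>
"""

Path("index.html").write_text(html, encoding="utf-8")
print("Wrote index.html -- serve the folder with `python -m http.server` to view it.")
```

Serving the folder with python -m http.server and opening the page in a browser is enough to orbit and zoom the model; anything more curated (labels, multiple objects, a floor plan) would need proper front-end work.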

Concerning photogrammetry, some purists may claim that to obtain the highest-quality 3D capture, one needs a DSLR camera and the appropriate tools to create a controlled environment for such captures. This is true, but it is also a matter of need and purpose. Some claim the iPhone is inadequate because its computational photography does not capture small details effectively, performing aggressive noise reduction that eliminates minute detail. My counterargument is that the real problem for photogrammetry is the cost of high-end equipment and the difficulty of transporting it to archaeological sites and museums.

As phone cameras continue to improve, along with photography apps that provide more control over imaging, we need not disparage the smartphone as an ineffective tool in the archaeologist’s pocket. Rather, we should embrace a smartphone’s capabilities and accessibility for archaeological imaging needs. Exploring this technology means working out sound methodological processes so that archaeologists with limited technical knowledge can easily produce 3D renders. NeRF via video capture on a phone could be the next step in the archaeological process.

Here is a 3D render of a statue at the Stoa of Attalos. The video used for the 3D model is two minutes long and was rendered in Luma.ai (below) and Polycam (left). The Luma render captured the colours well, but the surrounding environment is a mess. This is because I did not focus on the surroundings; as we will see with Polycam, its render removes the surrounding environment and provides a focused render of the object in question. However, although the Polycam capture rendered quite well, it did ‘bleach’ much of the stone. I do not know why it did this and have not had time to explore it.

Furthermore, in both cases some areas of the statue were not rendered because I could not capture them in the video. Luma.ai attempts to fill these gaps, but the result is blurry and imperfect. But it’s AI, and it’s learning, I think. Polycam left holes in the render and blurred the surrounding area, which I believe is more acceptable and gives a more “authentic” representation of the object, its digital twin, by not creating a calculated inference of how these spots might look. (Can AI infer?) AI reconstruction of non-captured or poorly captured sections of an object is problematic for future virtual study and can lead to misinterpretations by any scholar who, due to access issues, must depend on the image rather than the object itself. So what is the next step?

I want to load these captures into Blender and see what can be done to combine them into a more accurate representation of the object. This raises a host of questions about the ‘Digital Twin’ and authenticity, but those can be dealt with later.
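As a first pass at that Blender step, the sketch below, run from Blender’s Scripting workspace, imports two exports into separate collections so they can be compared and lined up by hand. The file paths and collection names are placeholders, and it assumes both captures have been exported as .glb files; the actual merging and clean-up would still be manual (or handled by dedicated alignment add-ons).

```python
# Minimal Blender (bpy) sketch: bring the Luma.ai and Polycam exports into one
# scene, each in its own collection, so they can be compared and aligned by hand.
# File paths and collection names are placeholders.
import bpy

def import_capture(filepath, collection_name):
    """Import a .glb export and move its objects into a named collection."""
    existing = set(bpy.data.objects)
    bpy.ops.import_scene.gltf(filepath=filepath)  # Blender's built-in glTF importer
    imported = [obj for obj in bpy.data.objects if obj not in existing]

    coll = bpy.data.collections.new(collection_name)
    bpy.context.scene.collection.children.link(coll)
    for obj in imported:
        # Unlink from whichever collection the importer placed the object in,
        # then link it into the new one so each capture stays grouped.
        for old_coll in list(obj.users_collection):
            old_coll.objects.unlink(obj)
        coll.objects.link(obj)
    return imported

import_capture("/path/to/statue_luma.glb", "Luma_capture")
import_capture("/path/to/statue_polycam.glb", "Polycam_capture")
```

Keeping each capture in its own collection makes it easy to toggle them on and off while judging which render handled a given area better, before deciding what, if anything, to merge.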
