Isabella Di Lenardo
Biography
Isabella di Lenardo is a scientific researcher and lecturer with experience in the fields of Art History and Archaeology, Digital Humanities, and Digital Urban History. She is the coordinator of the EPFL Time Machine Unit at the Digital Humanities Institute.
Her training began with archaeology and then moved to Modern Art History, particularly the Venetian context between 1400 and 1800. Her interest primarily lies in the European circulation of artworks and figurative patterns between 1500 and 1650.
She holds a Ph.D. in Theories and History of Arts. Her doctoral dissertation (2013) analyzed the artistic, social, and economic forces driving the trade and circulation of art between key Italian centers and the Flemish cultural area.
Since 2012 she has conducted studies on urban history applying digital methodologies, particularly geographic information systems, collaborating on pioneering projects in this field such as Visualizing Venice, a collaboration between the University Institute of Architecture in Venice (IUAV), the University of Padua, and Duke University.
In 2014 she joined the Digital Humanities Laboratory at EPFL as a scientific collaborator, where she was involved in several projects on urban reconstruction and visual analysis of artworks. She led the Replica project in collaboration with the Giorgio Cini Foundation in Venice, which involved digitizing the historical photo library and creating a search engine for visual similarity and visual genealogy between images. In this context she coordinated teams of researchers, students, professionals, and curators to carry out the project.
Between 2018 and 2020, she was a postdoctoral researcher and Principal Investigator at the Institut National d'Histoire de l'Art (INHA) in Paris, where she initiated and led the "Richelieu District" urban reconstruction project.
She is co-Principal Investigator of the SNSF project Parcels of Venice, which aims to reconstruct the informational and morphological evolution of urban property in Venice between 1700 and 1808. In this project, thousands of land records were extracted and analyzed to densify the information related to the owners and functions of the urban fabric before and after the fall of the Ancien Régime in Venice. An exploration and research interface is planned between 2024 and 2025.
Starting in 2016, she was among the initiators of the Time Machine: Big Data of the Past for the Future of Europe project. The project aimed to design and implement advanced new digitization and Artificial Intelligence (AI) technologies to mine Europe's vast cultural heritage, providing fair and free access to information that will support future scientific and technological developments in Europe. The European Time Machine project gave rise to the Time Machine Organisation, in which Isabella di Lenardo coordinates Local Time Machines on a European scale.
Over the years she has coordinated research teams with diverse profiles: researchers, scholars, public institutions, private foundations, and companies. She is very comfortable in international and interdisciplinary working environments, and regularly acts as an intermediary between computer scientists, humanities scholars, engineers, and representatives of cultural institutions.
She has taught ex cathedra courses in Urban History since 2010, in Digital Urban History at EPFL since 2014, and in Digital Art History at other universities internationally.
Professional career
Project Leader, Postdoctoral Researcher
Institut National d’Histoire de l’art de Paris
2018-2020
Postdoctoral Researcher
École Polytechnique Fédérale de Lausanne, Digital Humanities Laboratory
2014-2018
Faculty
VIU Venice International University (Venice)
2013-2018
Postdoctoral Researcher
IUAV Istituto Universitario d'Architettura (Venice)
2013-2014
Lecturer
IUAV Istituto Universitario d'Architettura (Venice)
2010-2013
Education
Ph.D
Theories and History of Arts “Honourable mention / Prize of department”
SSAV Scuola Superiore di Studi avanzati in Venezia (Venice) || IUAV Istituto Universitario di Architettura (Venice) || University Ca’ Foscari, scuola interateneo in Storia delle Arti (Venice)
2009-2013
MA
Letters and Philosophy, specialization in Art History and Archeology
University Ca’ Foscari (Venice)
2008
Fellow
Nederlands Interuniversitair Kunsthistorisch Instituut (NIKI)(Florence)
2010-2011
Publications
2024
A fragment-based approach for computing the long-term visual evolution of historical maps
Cartography, as a strategic technology, is a historical marker. Maps are tightly connected to the cultural construction of the environment. The increasing availability of digital collections of historical map images provides an unprecedented opportunity to study large map corpora. Corpus linguistics has led to significant advances in understanding how languages change. Research on large map corpora could in turn significantly contribute to understanding cultural and historical changes. We develop a methodology for cartographic stylometry, with an approach inspired by structuralist linguistics, considering maps as visual language systems. As a case study, we focus on a corpus of 10,000 French and Swiss maps, published between 1600 and 1950. Our method is based on the fragmentation of the map image into elementary map units. A fully interpretable feature representation of these units is computed by contrasting maps from different, coherent cartographic series, based on a set of candidate visual features (texture, morphology, graphical load). The resulting representation effectively distinguishes between map series, enabling the elementary units to be grouped into types, whose distribution can be examined over 350 years. The results show that the analyzed maps underwent a steady abstraction process during the 17th and 18th centuries. The 19th century brought a lasting scission between small- and large-scale maps. Macroscopic trends are also highlighted, such as a surge in the production of fine lines, and an increase in map load, that reveal cultural fashion processes and shifts in mapping practices. This initial research demonstrates how cartographic stylometry can be used for exploratory research on visual languages and cultural evolution in large map corpora, opening an effective dialogue with the history of cartography. It also deepens the understanding of cartography by revealing macroscopic phenomena over the long term.
Humanities & Social Sciences Communications. 2024-03-04. DOI : 10.1057/s41599-024-02840-w.
Artificial Intelligence for Digital Heritage Innovation: Setting up a R&D Agenda for Europe
Artificial intelligence (AI) is a game changer in many fields, including cultural heritage. It supports the planning and preservation of heritage sites and cities, enables the creation of virtual experiences to enrich cultural tourism and engagement, supports research, and increases access and understanding of heritage objects. Despite some impressive examples, the full potential of AI for economic, social, and cultural change is not yet fully visible. Against this background, this article aims to (a) highlight the scope of AI in the field of cultural heritage and innovation, (b) highlight the state of the art of AI technologies for cultural heritage, (c) highlight challenges and opportunities, and (d) outline an agenda for AI, cultural heritage, and innovation.
Heritage. 2024-02-01. DOI : 10.3390/heritage7020038.
2023
The Skin of Venice: Automatic Facade Extraction from Point Clouds
We propose a method to extract orthogonal views of facades from photogrammetric models of cities. This method was applied to extract all facades of the city of Venice. The result images open up new areas of research in architectural history.
2023-06-30. ADHO Digital Humanities Conference 2023 (DH2023), Graz, Austria, July 10-14, 2023. DOI : 10.5281/zenodo.8107943.
1805-1898 Census Records of Lausanne : a Long Digital Dataset for Demographic History
This historical dataset stems from the project of automatic extraction of 72 census records of Lausanne, Switzerland. The complete dataset covers a century of historical demography in Lausanne (1805-1898), which corresponds to 18,831 pages and nearly 6 million cells.
Content. The data published in this repository correspond to a first release, i.e. a diachronic slice of one register every 8 to 9 years. The remaining data are currently under embargo; their publication will take place as soon as possible, and at the latest by the end of 2023. In the meantime, the data presented here correspond to a large subset of 2,844 pages, which already allows most research hypotheses to be investigated. The population censuses, digitized by the Archives of the City of Lausanne, continuously cover the evolution of the population of Lausanne throughout the 19th century, starting in 1805, with only one long interruption from 1814 to 1831. Highly detailed, they are an invaluable source for studying migration, economic and social history, and traces of cultural exchanges not only with Bern, but also with France and Italy. Indeed, the system of tracing family origin, specific to Switzerland, makes it possible to follow the migratory movements of families long before the censuses appeared. The bourgeoisie is also an essential economic tracer. In addition, the censuses extensively describe the organization of the social fabric into family nuclei, around which gravitate various boarders, workers, servants or apprentices, often living in the same apartment with the family.
Production. The structure and richness of the censuses have also provided an opportunity to develop automatic methods for processing structured documents. The processing of the censuses includes several steps, from the identification of text segments to the restructuring of information as digital tabular data, through Handwritten Text Recognition and the automatic segmentation of the structure using neural networks. The detailed extraction methodology, as well as the complete evaluation of performance and reliability, is published in: Petitpierre R., Rappo L., Kramer M. (2023). An end-to-end pipeline for historical censuses processing. International Journal on Document Analysis and Recognition (IJDAR). doi: 10.1007/s10032-023-00428-9
Data structure. The data are structured in rows and columns, with each row corresponding to a household. Multiple entries in the same column for a single household are separated by vertical bars 〈|〉. The center point 〈·〉 indicates an empty entry. For some columns (e.g., street name, house number, owner name), an empty entry indicates that the last non-empty value should be carried over. The page number is in the last column.
Liability. The data presented here are neither curated nor verified. They are the raw results of the extraction, the reliability of which was thoroughly assessed in the above-mentioned publication. For any reuse of these data for research purposes, an appropriate methodology is necessary; this may typically include string-distance heuristics or statistical methods to deal with noise and uncertainty.
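As an illustration of the tabular conventions described above, here is a minimal parsing sketch in Python. It assumes a hypothetical CSV export of the published tables and uses illustrative column names; it is not part of the released dataset or its official tooling.

    import csv

    # Hypothetical names for the columns whose empty entries carry over the last value.
    CARRY_OVER = {"street_name", "house_number", "owner_name"}

    def parse_census(path):
        """Read a (hypothetical) CSV export of the census tables, splitting
        multi-entries on the vertical bar, treating the centre point as empty,
        and carrying over the last non-empty value for the carry-over columns."""
        rows, last_seen = [], {}
        with open(path, newline="", encoding="utf-8") as f:
            for raw in csv.DictReader(f):
                row = {}
                for column, value in raw.items():
                    value = value.strip()
                    if value == "·":                       # centre point marks an empty entry
                        value = ""
                    if not value and column in CARRY_OVER:
                        value = last_seen.get(column, "")  # repeat the last non-empty value
                    elif value:
                        last_seen[column] = value
                    row[column] = value.split("|") if "|" in value else value
                rows.append(row)
        return rows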
2023-03-20.
Recartographier l'espace napoléonien : Une lecture computationnelle du cadastre historique de Lausanne
The Napoleonic cadastre is a relatively homogeneous and widely available historical source, which makes a computational approach particularly relevant. In this study, we propose a state-of-the-art method for the automatic recognition and vectorisation of cadastral maps. We demonstrate its effectiveness on the Lausanne case and propose congruent methods of analysis.
2023. Humanistica 2023, Association francophone des humanités numériques, Geneva, Switzerland, June 26-28, 2023.
Machine-Learning-Enhanced Procedural Modeling for 4D Historical Cities Reconstruction
The generation of 3D models depicting cities in the past holds great potential for documentation and educational purposes. However, it is often hindered by incomplete historical data and the specialized expertise required. To address these challenges, we propose a framework for historical city reconstruction. By integrating procedural modeling techniques and machine learning models within a Geographic Information System (GIS) framework, our pipeline allows for effective management of spatial data and the generation of detailed 3D models. We developed an open-source Python module that fills gaps in 2D GIS datasets and directly generates 3D models up to LOD 2.1 from GIS files. The use of the CityJSON format ensures interoperability and accommodates the specific needs of historical models. A practical case study using footprints of the Old City of Jerusalem between 1840 and 1940 demonstrates the creation, completion, and 3D representation of the dataset, highlighting the versatility and effectiveness of our approach. This research contributes to the accessibility and accuracy of historical city models, providing tools for the generation of informative 3D models. By incorporating machine learning models and maintaining the dynamic nature of the models, we ensure the possibility of supporting ongoing updates and refinement based on newly acquired data. Our procedural modeling methodology offers a streamlined and open-source solution for historical city reconstruction, eliminating the need for additional software and increasing the usability and practicality of the process.
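To give a concrete flavour of the procedural step such a pipeline relies on, here is a minimal sketch of extruding a building footprint into a simple block model. It is an illustration only, producing an LOD1-style solid rather than the LOD 2.1 output described in the paper, and it is not the project's actual open-source module.

    def extrude_footprint(footprint, height):
        """Extrude a 2D footprint (list of (x, y) vertices, counter-clockwise)
        into a simple block: floor, flat roof, and one wall per edge.
        Each face is returned as a list of (x, y, z) vertices."""
        floor = [(x, y, 0.0) for x, y in reversed(footprint)]  # reversed so the normal points down
        roof = [(x, y, height) for x, y in footprint]
        faces = [floor, roof]
        n = len(footprint)
        for i in range(n):
            (x1, y1), (x2, y2) = footprint[i], footprint[(i + 1) % n]
            faces.append([(x1, y1, 0.0), (x2, y2, 0.0), (x2, y2, height), (x1, y1, height)])
        return faces

    # Example: a 10 m x 6 m rectangular footprint extruded to a height of 9 m.
    block = extrude_footprint([(0, 0), (10, 0), (10, 6), (0, 6)], 9.0)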
Remote Sensing. 2023. DOI : 10.3390/rs15133352.
Ce que les machines ont vu et que nous ne savons pas encore
This article conceptualises the idea that there exists a "dark matter" made up of the latent structures identified by the machine gaze on large heritage photographic collections. The photographic campaigns of twentieth-century art history had the implicit ambition of turning every work of art into a document that could be studied more easily. Over time, the creation of these visual collections produced a corpus of information potentially denser and richer than its creators had originally imagined. Indeed, the digital conversion of these immense visual corpora now makes it possible to re-analyse these images with computer vision techniques, artificial intelligence thus opening the way to research perspectives quite different from those conceivable in the last century. We could therefore say that these images hold an immense latent potential of knowledge, a dense network of relations that has not yet been brought to light. What have machines seen, or what will they be able to see, in these image collections that humans have not yet identified? How far does human visual knowledge extend compared with what the machine has been able to analyse? The new techniques for indexing images and the motifs that compose them bring us closer to a Copernican revolution of the visual, in which humans can, thanks to the machine-prosthesis, analyse far more images than they could through memory alone, and select specific perspectives by comparing sets of motifs with one another. This augmented vision is based on a pre-analysis carried out by the machine on the whole of these visual corpora, a training that makes it possible to recover the underlying structure of the image system. Human vision is thus extended by the machine's prior artificial gaze. To understand what is at stake in this new alliance, we must study the nature of this artificial gaze, understand its potential for discovering hitherto unknown structures, and anticipate the new forms of human knowledge to which it may give rise. The challenge for the coming years will therefore be to understand what machines have seen that we do not yet know.
Sociétés & Représentations. 2023. DOI : 10.3917/sr.055.0249.
Lausanne Historical Censuses Dataset HTR 35k
This training dataset includes a total of 34,913 manually transcribed text segments. It is dedicated to the handwritten text recognition (HTR) of historical sources, typically tabular records such as censuses. This dataset is based on a sample of 83 pages from the 19th century (1805-1898) censuses of Lausanne, Switzerland. The primary language of the documents is French, although many Germanic names and toponyms are also found. The training data are formatted and provided on the model of the Bentham dataset. The format thus simply consists of a list of jpeg images, one per text segment, and their corresponding transcriptions, stored in a txt file. The file naming convention is 'yyyy-ppp-n', where 'y' stands for the year of publication of the census, and 'p' for the page number. The digitized documents are provided by the Archives of the City of Lausanne. Please note that the annotation and extraction methodology, as well as the complete evaluation of performance, including the HTR benchmark and post-correction performance, is published in: Petitpierre R., Rappo L., Kramer M. (2023). An end-to-end pipeline for historical censuses processing. International Journal on Document Analysis and Recognition (IJDAR). doi: 10.1007/s10032-023-00428-9. The tabular dataset resulting from the automatic extraction is also available on Zenodo: Petitpierre R., Rappo L., Kramer M., di Lenardo I. (2023). 1805-1898 Census Records of Lausanne : a Long Digital Dataset for Demographic History. Zenodo. doi: 10.5281/zenodo.7711640
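A minimal loading sketch for such a segment-level dataset might look as follows. It assumes, purely for illustration, that each segment is stored as a .jpg with a sibling .txt transcription sharing the same 'yyyy-ppp-n' stem; the actual release follows the Bentham-style layout described above.

    from pathlib import Path

    def load_segments(root):
        """Pair each segment image with its transcription and parse the
        'yyyy-ppp-n' naming convention (census year, page number, segment index).
        Assumes, hypothetically, one .jpg and one .txt per segment sharing a stem."""
        segments = []
        for image in sorted(Path(root).glob("*.jpg")):
            year, page, index = image.stem.split("-")
            transcription_file = image.with_suffix(".txt")
            segments.append({
                "image": image,
                "year": int(year),
                "page": int(page),
                "segment": int(index),
                "transcription": transcription_file.read_text(encoding="utf-8").strip()
                                 if transcription_file.exists() else "",
            })
        return segments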
2023.
2022
A data structure for scientific models of historical cities: extending the CityJSON format
In the field of the 3D reconstruction of cities in the past there is a rising interest in the creation of models that are not just geometrical, but also informative, semantic and georeferenced. Despite the advances made in the historical reconstruction of architecture and archaeology, the solutions designed for larger-scale models are still very limited. On the other hand, research on the digitisation of present-day cities provides useful instruments. In particular, CityJSON - a JSON encoding of CityGML - represents an easy-to-use and lightweight solution for storing 3D models of cities that are geolocated, semantic and that contain additional information in the form of attributes. This contribution proposes (1) to extend the schema to the needs of a historical representation; and (2) to incorporate the newly created model in a continuous-flow pipeline, in which the geometry is dynamically updated each time an attribute is changed, as a means to foster collaboration.
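For readers unfamiliar with the format, the sketch below shows what a minimal CityJSON object carrying historical information could look like. The temporal attribute names (existence_start, existence_end, source) and all values are illustrative assumptions, not the extension actually proposed in the paper.

    import json

    # A minimal CityJSON-style document for one historical building.
    # The attributes are hypothetical stand-ins for a temporal extension.
    city_model = {
        "type": "CityJSON",
        "version": "1.1",
        "transform": {"scale": [1.0, 1.0, 1.0], "translate": [0.0, 0.0, 0.0]},
        "CityObjects": {
            "building_001": {
                "type": "Building",
                "attributes": {
                    "existence_start": 1740,  # hypothetical first attested date
                    "existence_end": 1902,    # hypothetical demolition date
                    "source": "historical cadastre (illustrative value)",
                },
                "geometry": [{
                    "type": "Solid",
                    "lod": "1",
                    # One shell made of six surfaces: floor, roof, four walls.
                    "boundaries": [[
                        [[3, 2, 1, 0]], [[4, 5, 6, 7]],
                        [[0, 1, 5, 4]], [[1, 2, 6, 5]],
                        [[2, 3, 7, 6]], [[3, 0, 4, 7]],
                    ]],
                }],
            }
        },
        "vertices": [
            [0, 0, 0], [10, 0, 0], [10, 6, 0], [0, 6, 0],
            [0, 0, 9], [10, 0, 9], [10, 6, 9], [0, 6, 9],
        ],
    }

    print(json.dumps(city_model, indent=2))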
2022-11-11. 6th ACM SIGSPATIAL International Workshop on Geospatial Humanities, Seattle, Washington, November 1, 2022. p. 20-23. DOI : 10.1145/3557919.3565813.
The Replica Project: Co-Designing a Discovery Engine for Digital Art History
This article explains how the Replica project is a particular case of different professionals coming together to achieve the digitization of a historical photographic archive, intersecting complementary knowledge specific to normally unconnected communities. In particular, the community of Art History researchers, brought together here around their common methodologies in the practice of visual pattern research, became protagonists in the construction of a specific tool, the Morphograph, to navigate through the archive's photos. A specific research problem, the recognition of visual patterns migrating from one work to another, became the key to developing a new technology initially intended for a specific community of users, but so generic in its approach that it could easily be made available to other, uninformed users as a learning-by-doing tool. The Morphograph tool also made it possible to demonstrate how, within a community, the partial expertise of individuals needs to be related and benefits enormously from the knowledge densification made possible by sharing. The digital context makes it easy to create tools that are specific in content but generic in form, which can be communicated and shared with even diverse and uninformed communities.
Multimodal Technologies and Interaction. 2022. DOI : 10.3390/mti6110100.
2021
Generic Semantic Segmentation of Historical Maps
Research in automatic map processing is largely focused on homogeneous corpora or even individual maps, leading to inflexible models. Based on two new corpora, the first one centered on maps of Paris and the second one gathering maps of cities from all over the world, we present a method for computing the figurative diversity of cartographic collections. In a second step, we discuss the actual opportunities for CNN-based semantic segmentation of historical city maps. Through several experiments, we analyze the impact of figurative and cultural diversity on the segmentation performance. Finally, we highlight the potential for large-scale and generic algorithms. Training data and code of the described algorithms are made open-source and published with this article.
2021-11-17. CHR 2021: Computational Humanities Research Conference, Amsterdam, The Netherlands, November 17-19, 2021. p. 228-248.
Aux portes du monde miroir
The Mirror World is no longer an imaginary device, a mirage in a distant future, it is a reality under construction. In Europe, Asia and on the American continent, large companies and the best universities are working to build the infrastructures, to define their functionalities, to specify their logistics. The Mirror World, in its asymptotic form, presents a quasi-continuous representation of the world in motion, integrating, virtually, all photographic perspectives. It is a new giant computational object, opening the way to new research methods or even probably to a new type of science. The economic and cultural stakes of this third platform are immense. If the Mirror World transforms access to knowledge for new generations, as the Web and Social Networks did in their time, it is our responsibility to understand and, if need be, bend its technological trajectory to make this new platform an environment for the critical knowledge of the past and the creative imagination of the future.
Revue Histoire de l’art : Humanités numériques. 2021-06-29.
Une approche computationnelle du cadastre napoléonien de Venise
At the beginning of the 19th century, the Napoleonic administration introduced a new standardised description system to give an objective account of the form and functions of the city of Venice. The cadastre, deployed on a European scale, offered for the first time an articulated and precise view of the structure of the city and its activities, through a methodical approach and standardised categories. With the use of digital techniques, based in particular on deep learning, it is now possible to extract from these documents an accurate and dense representation of the city and its inhabitants. By systematically checking the consistency of the extracted information, these techniques also evaluate the precision and systematicity of the surveyors’ work and therefore indirectly qualify the trust to be placed in the extracted information. This article reviews the history of this computational protosystem and describes how digital techniques offer not only systematic documentation, but also extraction perspectives for latent information, as yet uncharted, but implicitly present in this information system of the past.
Humanités numériques. 2021-05-01. DOI : 10.4000/revuehn.1786.
2020
I sistemi di immagini nell’archivio digitale di Vico Magistretti
The online release of Vico Magistretti's digitised archive, which brings together tens of thousands of preparatory drawings, technical drawings and photographs produced between 1946 and 2006, opens the way to a major renewal of research on the Italian designer and architect. The opening of such a special archive invites us to imagine the different perspectives that can be considered for exploring, visualising and studying such a body of documents.
Narrare con l'Archivio. Forum internazionale, Milan, Italy, November 19, 2020.
Swiss in motion : Analyser et visualiser les rythmes quotidiens. Une première approche à partir du dispositif Time-Machine.
Over the last 50 years, technological developments in transport and telecommunications have helped reconfigure spatio-temporal behaviour (Kaufmann, 2008). Individuals now benefit from a wide range of choices in terms of transport modes and accessible places for carrying out their activities. This configuration particularly influences daily mobility behaviour, which tends to become more complex in both its spatial and temporal dimensions, leading to the emergence of intense and complex daily rhythms (Drevon, Gwiazdzinski, & Klein, 2017; Gutiérrez & García-Palomares, 2007). Recent research on Switzerland (Drevon, Gumy, & Kaufmann, 2020) suggests that daily rhythms are marked by considerable diversity in terms of spatio-temporal configuration and density of activities (Drevon, Gumy, Kaufmann, & Hausser, 2019). The share of daily rhythms corresponding to the commute-work-sleep pattern is ultimately relatively modest. This diversity of daily rhythms unfolds between very complex behaviours on the one hand and much simpler ones on the other, materialising at different spatial scales. Current analytical tools in the social sciences and in the socio-economics of transport still struggle to account for complex forms of daily rhythms at the individual and territorial level. Faced with this epistemological and methodological challenge, this paper proposes an innovative and interdisciplinary approach combining sociology, geography and computational science. Concretely, it proposes a geo-visualisation tool for daily rhythms at individual and territorial scales, based on the spatio-temporal behaviour of the inhabitants of Switzerland. The objective of this approach is to put into perspective the differences in activity intensity between social situations and territories. The analyses are based on the Mobility and Transport Microcensus (MRMT), carried out every five years at national level by the Federal Statistical Office and the Federal Office for Spatial Development, conducted in 2015. This survey comprises a sample of 57,090 people who were interviewed about all the trips they made on the day before the survey (CATI survey protocol). The visualisation is produced with the Time-Machine infrastructure (Kaplan, 2013; di Lenardo & Kaplan, 2015), which makes it possible to model a virtual 4D environment (Figure 1: https://youtu.be/41-klvXLCqM) and to simulate the unfolding of daily activities and trips. The first simulations reveal contrasting rhythmic regimes at the individual scale, which differ according to pace, frequency of actions, spatial scale and social position. At the territorial level, the visualisations reveal significant differences in the intensity of use of the territory by individuals, as well as spatial specificities constitutive of the activities carried out there. These first visualisations make it possible, first of all, to reveal social inequalities (gender, class) in the face of the pressure to be active (Viry, Ravalet, & Kaufmann, 2015; Drevon, 2019; Drevon & Kaufmann, 2020). They also make it possible to reopen the discussion on how territories are categorised (Rérat, 2008; Schuler et al., 2007) on the basis of a dynamic approach that reflects the reality of temporary activities, thereby also putting into perspective the principles of factorial urban ecology (Pruvot & Weber-Klein, 1984) and reinforcing the relevance of the presential economy (Lejoux, 2009).
Swiss Mobility Conference, Lausanne, October 29-30, 2020.
The Advent of the 4D Mirror World
The 4D Mirror World is considered to be the next planetary-scale information platform. This commentary gives an overview of the history of the converging trends that have progressively shaped this concept. It retraces how large-scale photographic surveys served to build the first 3D models of buildings, cities, and territories, how these models got shaped into physical and virtual globes, and how eventually the temporal dimension was introduced as an additional way for navigating not only through space but also through time. The underlying assumption of the early large-scale photographic campaign was that image archives had deeper depths of latent knowledge still to be mined. The technology that currently permits the advent of the 4D World through new articulations of dense photographic material combining aerial imagery, historic photo archives, huge video libraries, and crowd-sourced photo documentation precisely exploits this latent potential. Through the automatic recognition of “homologous points,” the photographic material gets connected in time and space, enabling the geometrical computation of hypothetical reconstructions accounting for a perpetually evolving reality. The 4D world emerges as a series of sparse spatiotemporal zones that are progressively connected, forming a denser fabric of representations. On this 4D skeleton, information of cadastral maps, BIM data, or any other specific layers of a geographical information system can be easily articulated. Most of our future planning activities will use it as a way not only to have smooth access to the past but also to plan collectively shared scenarios for the future.
Urban Planning. 2020-06-30. DOI : 10.17645/up.v5i2.3133.
Reconstruire la densité historique du « Quartier Richelieu »
In the project "Richelieu. Histoire du quartier", young digital humanities engineers and historians began analysing the addresses of the Bottin trade directory over the period 1839-1922. The technique made it possible to identify the information corresponding to around 200,000 addresses in the Richelieu district. A geographic database was developed to visualise, currently in 2D, all occupations and addresses on a map, together with all the historical changes in the street network and buildings. This is the first time that all the addresses of the Annuaires du Commerce have been extracted at scale, using state-of-the-art methodologies for historical documents.
2020-04-23.
Building a Mirror World for Venice
Between 2012 and 2019, ‘The Venice Time Machine Project’ developed a new methodology for modelling the past, present, and future of a city. This methodology is based on two pillars: (a) the vast digitisation and processing of the selected city’s historical records, and (b) the digitisation of the city itself, another vast undertaking. The combination of these two processes has the potential to create a new kind of historical information system organised around a diachronic digital twin of a city.
The Aura in the Age of Digital Materiality : Rethinking Preservation in the Shadow of an Uncertain Future; Milan: Silvana Editoriale, 2020.
2019
A deep learning approach to Cadastral Computing
This article presents a fully automatic pipeline to transform the Napoleonic Cadastres into an information system. The cadastres established during the first years of the 19th century cover a large part of Europe. For many cities they provide one of the first geometrical surveys, linking precise parcels with identification numbers. These identification numbers point to registers recording the names of the proprietors. As the Napoleonic Cadastres include millions of parcels, they offer a detailed snapshot of a large part of Europe’s population at the beginning of the 19th century. As many kinds of computation can be done on such a large object, we use the neologism “cadastral computing” to refer to the operations performed on such datasets. This approach is the first fully automatic pipeline to transform the Napoleonic Cadastres into an information system.
2019-07-11. Digital Humanities Conference, Utrecht, Netherlands, July 8-12, 2019.
Repopulating Paris: massive extraction of 4 Million addresses from city directories between 1839 and 1922
In 1839, in Paris, the Maison Didot bought the Bottin company. Sébastien Bottin, trained as a statistician, was the initiator of a high-impact yearly publication, called “Almanachs”, containing the listing of residents, businesses and institutions, arranged geographically, alphabetically and by activity typologies (Fig. 1). These regular publications met with great success. In 1820, the Parisian Bottin Almanach contained more than 50,000 addresses, and until the end of the 20th century the word “Bottin” was the colloquial term for a city directory in France. The publication of the “Didot-Bottin” continued at an annual rhythm, mapping the evolution of the active population of Paris and other cities in France. The relevance of automatically mining city directories for historical reconstruction has already been argued by several authors (e.g. Osborne, N., Hamilton, G. and Macdonald, S. 2014, or Berenbaum, D. et al. 2016). This article reports on the extraction and analysis of the data contained in the “Didot-Bottin” covering the period 1839-1922 for Paris, digitized by the Bibliothèque nationale de France. We process more than 27,500 pages to create a database of 4.2 million entries linking addresses, person mentions and activities.
2019-07-02. Digital Humanities Conference 2019 (DH2019), Utrecht, the Netherlands, July 9-12, 2019. DOI : 10.34894/MNF5VQ.
Can Venice be saved?
Will Venice be inhabitable in 2100? What kinds of policies can we develop to navigate the best scenarios for this floating city? In 2012, the École Polytechnique Fédérale de Lausanne (EPFL) and the University Ca’Foscari launched a programme called the Venice Time Machine to create a large-scale digitisation project transforming Venice’s heritage into ‘big data’. Thanks to the support of the Lombard Odier Foundation, millions of pages and photographs have been scanned at the state archive in Venice and at the Fondazione Giorgio Cini. While commercial robotic scanners were used at the archives, a new typology of robotised circular table was developed by Adam Lowe and his team at Factum Arte to process the million photographs of Fondazione Giorgio Cini. The documents were analysed using deep-learning artificial-intelligence methods to extract their textual and iconographic content and to make the data accessible via a search engine. Also during this time, thousands of primary and secondary sources were compiled to create the first 4D model (3D + time) of the city, showing the evolution of its urban fabric. This model and the other data compiled by the Venice Time Machine were part of an exhibition at the Venice Pavilion of the Biennale of Architecture in 2018, shown side-by-side with potential projects for Venice’s future. Having reached an important milestone in convincing not only the Venetian stakeholders but also a growing number of partners around the world that care about Venice’s future, the Venice Time Machine is now raising funds for the most ambitious simulation of the city that has ever been developed. Its planned activities include a high-resolution digitisation campaign of the entire city at centimetre scale, a crucial step on which to base a future simulation of the city’s evolution, while also creating a digital model that can be used for preservation regardless of what occurs in the coming decades. On the island of San Giorgio Maggiore, a digitisation centre called ARCHiVe (Analysis and Recording of Cultural Heritage in Venice) opened in 2018 to process a large variety of Venetian artefacts. This is a joint effort of Factum Foundation, the École Polytechnique Fédérale de Lausanne and the Fondazione Giorgio Cini, along with philanthropic support from the Helen Hamlyn Trust. The centre aims to become a training centre for future cultural heritage professionals who would like to learn how they can use artificial intelligence and robotics to preserve documents, objects and sites. These operations will work together to create a multiscale digital model of Venice, combining the most precise 4D information on the evolution of the city and its population with all the available documentation of its past. The project aims to demonstrate how this ‘digital double’ can be achieved by using robotic technology to scan the city and its archives on a massive scale, using artificial intelligence techniques to process documents and collecting the efforts of thousands of enthusiastic Venetians. In a project called ‘Venice 2100’, the Venice Time Machine team’s ambition is to show how a collectively built information system can be used to build realistic future scenarios, blending ecological and social data into large-scale simulations. The Venice Time Machine’s ‘hypermodel’ will also create economic opportunities. 
If its hypotheses are valid, Venice could host the first incubators for start-ups using big data of the past to develop services for smart cities, creative industries, education, academic scholarship and policy making. This could be the beginning of a renewal of Venice’s economic life, encouraging younger generations to pursue activities in the historic city, at the heart of what may become one of the first AI-monitored cities of the world. Venice can reinvent itself as the city that put the most advanced information technology and cultural heritage at the core of its survival and its strategy for development. Artificial intelligence can not only save Venice, but Venice can be the place to invent a new form of artificial intelligence.
Apollo, The International Art Magazine. 2019-01-02.
Digital Cultural Heritage meets Digital Humanities
Digital Cultural Heritage and Digital Humanities have historically been the focus of different communities, approaching different research topics and - from an organizational point of view - belonging to different departments. However, are they that different? The idea of this joint article involving digital humanists and heritage researchers is to examine communities, concepts and research applications as well as shared challenges. Beyond a collection of problem-centred essays, it is intended to initiate a fruitful discussion about commonalities and differences between both scholarly fields and to assess to what extent they are two sides of the same coin.
2019-01-01. 27th CIPA International Symposium on Documenting the Past for a Better Future, Avila, Spain, September 1-5, 2019. p. 812-820. DOI : 10.5194/isprs-archives-XLII-2-W15-813-2019.
Frederic Kaplan, Isabella di Lenardo
Apollo, The International Art Magazine. 2019-01-01.
2018
New Techniques for the Digitization of Art Historical Photographic Archives - the Case of the Cini Foundation in Venice
Numerous libraries and museums hold large art historical photographic collections, numbering millions of images. Because of their non-standard format, these collections pose special challenges for digitization. This paper addresses these difficulties by proposing new techniques developed for the digitization of the photographic archive of the Cini Foundation. This included the creation of a custom-built circular, rotating scanner. The resulting digital images were then automatically indexed, while artificial intelligence techniques were employed in information extraction. Combined, these tools vastly sped up processes which were traditionally undertaken manually, paving the way for new ways of exploring the collections.
Archiving Conference. 2018-02-01. DOI : 10.2352/issn.2168-3204.2018.1.0.2.
Dürer tra Norimberga e Venezia, 1506-1507
Dürer e il Rinascimento, tra Germania e Italia; 24 Ore Cultura, 2018.
Extracting And Aligning Artist Names in Digitized Art Historical Archives
The largest collections of art historical images are not found online but are safeguarded by museums and other cultural institutions in photographic libraries. These collections can encompass millions of reproductions of paintings, drawings, engravings and sculptures. The 14 largest institutions together hold an estimated 31 million images (Pharos). Manual digitization and extraction of image metadata undertaken over the years has succeeded in placing fewer than 100,000 of these items online for search. Given the sheer size of the corpus, it is pressing to devise new ways for the automatic digitization of these art historical archives and the extraction of their descriptive information (metadata which can contain artist names, image titles, and holding collection). This paper focuses on the crucial pre-processing steps that permit the extraction of information directly from scans of a digitized photo collection. Taking the photographic library of the Giorgio Cini Foundation in Venice as a case study, this paper presents a technical pipeline which can be employed in the automatic digitization and information extraction of large collections of art historical images. In particular, it details the automatic extraction and alignment of artist names to known databases, which opens a window into a collection whose contents are unknown. Numbering nearly one million images, the art history library of the Cini Foundation was established in the mid-twentieth century to collect and record the history of Venetian art. The current study examines the corpus of the 330'000+ digitized images.
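As a toy illustration of what aligning extracted names to a reference list can involve, the sketch below uses simple string similarity. The authority list, the OCR output and the similarity cutoff are all invented for the example; the paper's actual pipeline is more elaborate.

    import difflib

    # Hypothetical authority list of canonical artist names.
    AUTHORITY = ["Tiziano Vecellio", "Paolo Veronese", "Jacopo Tintoretto", "Jacopo Bassano"]

    def align_artist_name(ocr_name, authority=AUTHORITY, cutoff=0.75):
        """Return the closest canonical name for an OCR-extracted artist name,
        or None if no candidate is similar enough."""
        matches = difflib.get_close_matches(ocr_name, authority, n=1, cutoff=cutoff)
        return matches[0] if matches else None

    print(align_artist_name("Tizian Vecelio"))   # -> "Tiziano Vecellio"
    print(align_artist_name("Rembrandt"))        # -> None (no close match in the list)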
2018. Digital Humanities Conference 2018 Puentes-Bridges, Mexico City, June 26-29, 2018.
"Een Italische Keucken van Dirick de Vriese" The Commercialisation of the Artistic Identity between Venice and the 'North'
In the second half of the sixteenth century the artistic exchanges between Venice and the Low Countries intensified. Although no Venetian painters settled in Antwerp or in the cities of the Low Countries, several painters of Flemish origin, in particular Dirck de Vries and Ludovico Pozzoserrato, moved to Venice. These two personalities fostered the circulation in Venice of paintings produced in Flanders and, at the same time, produced paintings featuring subjects characterized by a marked Venetian identity.
Artibus Et Historiae. 2018-01-01.
Making large art historical photo archives searchable
In recent years, museums, archives and other cultural institutions have initiated important programs to digitize their collections. Millions of artefacts (paintings, engravings, drawings, ancient photographs) are now represented in digital photographic format. Furthermore, through progress in standardization, a growing portion of these images are now available online, in an easily accessible manner. This thesis studies how such large-scale art history collections can be made searchable using new deep learning approaches for processing and comparing images. It takes as a case study the processing of the photo archive of the Foundation Giorgio Cini, where more than 300'000 images have been digitized. We demonstrate how a generic processing pipeline can reliably extract the visual and textual content of scanned images, opening up ways to efficiently digitize large photo-collections. Then, by leveraging an annotated graph of visual connections, a metric is learnt that allows clustering and searching through artwork reproductions independently of their medium, effectively solving a difficult problem of cross-domain image search. Finally, the thesis studies how a complex Web Interface allows users to perform different searches based on this metric. We also evaluate the process by which users can annotate elements of interest during their navigation to be added to the database, allowing the system to be trained further and give better results. By documenting a complete approach on how to go from a physical photo-archive to a state-of-the-art navigation system, this thesis paves the way for a global search engine across the world's photo archives.
Lausanne, EPFL, 2018. DOI : 10.5075/epfl-thesis-8857.
2017
Tracking Transmission of Details in Paintings
In previous articles (di Lenardo et al, 2016; Seguin et al, 2016), we explored how efficient visual search engines operating not on the basis of textual metadata but directly through visual queries could fundamentally change the navigation in large databases of works of art. In the present work, we extended our search engine in order to be able to search not only for global similarity between paintings, but also for matching details. This feature is of crucial importance for retrieving the visual genealogy of a painting, as it is often the case that one composition simply reuses a few elements of other works. For instance, some workshops of the 16th century had repertoires of specific characters (a peasant smoking a pipe, a couple dancing, etc.) and anatomical parts (head poses, hands, etc.) that they reused in many compositions (van den Brink, 2001; Tagliaferro et al, 2009). In some cases it is possible to track the circulation of these visual patterns over long spatial and temporal migrations, as they are progressively copied by several generations of painters. Identifying these links makes it possible to reconstruct the production context of a painting, and the connections between workshops and artists. In addition, it permits a fine-grained study of taste evolution in the history of collections, following specific motifs successfully reused in a large number of paintings. Tracking these graphical replicators is challenging as they can vary in texture and medium. For instance, a particular character or a head pose in a painting may have been copied from a drawing, an engraving or a tapestry. It is therefore important that the search for matching details still detects visual reuse even across such different media and styles. In the rest of the paper, we describe the matching method and discuss some results obtained using this approach.
2017-01-27. Alliance of Digital Humanities Organizations (ADHO), Montréal, Canada, August 8-11, 2017.
Machine Vision Algorithms on Cadaster Plans
Cadaster plans are cornerstones for reconstructing dense representations of the history of the city. They provide information about the city's urban shape, making it possible to reconstruct the footprints of the most important urban components as well as information about the urban population and city functions. However, as some of these handwritten documents are more than 200 years old, establishing a processing pipeline for interpreting them remains extremely challenging. We present the first implementation of a fully automated process capable of segmenting and interpreting Napoleonic Cadaster Maps of the Veneto Region dating from the beginning of the 19th century. Our system extracts the geometry of each of the drawn parcels, and classifies, reads and interprets the handwritten labels.
2017. Premiere Annual Conference of the International Alliance of Digital Humanities Organizations (DH 2017), Montreal, Canada, August 8-11, 2017.
Big Data of the Past
Big Data is not a new phenomenon. History is punctuated by regimes of data acceleration, characterized by feelings of information overload accompanied by periods of social transformation and the invention of new technologies. During these moments, private organizations, administrative powers, and sometimes isolated individuals have produced important datasets, organized following a logic that is often subsequently superseded but was at the time, nevertheless, coherent. To be translated into relevant sources of information about our past, these document series need to be redocumented using contemporary paradigms. The intellectual, methodological, and technological challenges linked to this translation process are the central subject of this article.
Frontiers in Digital Humanities. 2017. DOI : 10.3389/fdigh.2017.00012.
Optimized scripting in Massive Open Online Courses
The Time Machine MOOC, currently under preparation, is designed to provide the necessary knowledge for students to use the editing tool of the Time Machine platform. The first test case of the platform is centered on our current work on the City of Venice and its archives. Small teaching modules focus on specific skills of increasing difficulty: segmenting a word on a page, transcribing a word from a document series, georeferencing ancient maps using homologous points, disambiguating named entities, redrawing urban structures, finding matching details between paintings, and writing scripts that perform some of these tasks automatically. Other skills include actions in the physical world, like scanning pages, books and maps, or performing a photogrammetric reconstruction of a sculpture by taking a large number of pictures. Finally, some other modules are dedicated to general historical, linguistic, technical or archival knowledge that constitutes a prerequisite for mastering specific tasks. A general dependency graph has been designed, specifying in which order the skills can be acquired. The performance of most tasks can be tested using pre-defined exercises and evaluation metrics, which allows for a precise evaluation of the level of mastery of each student. When the student successfully passes the test related to a skill, he or she gets the credentials to use that specific tool in the platform and starts contributing. However, the teaching options can vary greatly for each skill. Building upon the script concept developed by Dillenbourg and colleagues, we designed each tutorial as a parameterized sequence. A simple gradient descent method is used to progressively optimize the parameters in order to maximize the success rate of the students at the skill tests, and therefore to seek a form of optimality among the various design choices for the teaching methods. Thus, the more students use the platform, the more efficient the teaching scripts become.
Dariah Teach, Université de Lausanne, Switzerland, March 23-24, 2017.
2016
Visual Link Retrieval in a Database of Paintings
This paper examines how far state-of-the-art machine vision algorithms can be used to retrieve common visual patterns shared by series of paintings. The search for such visual patterns, central to Art History research, is challenging because of the diversity of similarity criteria that could relevantly demonstrate genealogical links. We design a methodology and a tool to efficiently annotate clusters of similar paintings and test various algorithms in a retrieval task. We show that a pretrained convolutional neural network can perform better for this task than other machine vision methods aimed at photograph analysis. We also show that retrieval performance can be significantly improved by fine-tuning a network specifically for this task.
2016. VISART Workshop, ECCV, Amsterdam, September 2016. DOI : 10.1007/978-3-319-46604-0_52.
Visual Patterns Discovery in Large Databases of Paintings
The digitization of large databases of photographs of works of art opens new avenues for research in art history. For instance, collecting and analyzing painting representations beyond the relatively small number of commonly accessible works was previously extremely challenging. In the coming years, researchers are likely to have easier access to representations of paintings not only from museum archives but also from private collections, fine arts auction houses and art historians. However, access to large online databases is in itself not sufficient. There is a need for efficient search engines, capable of searching painting representations not only on the basis of textual metadata but also directly through visual queries. In this paper we explore how convolutional neural network descriptors can be used in combination with algebraic queries to express powerful search queries in the context of art history research.
2016. Digital Humanities 2016, Krakow, Poland, July 11-16, 2016.
2015
Venezia e l’invenzione del paesaggio urbano tra laguna e città
Acqua e Cibo. Storie di Laguna e Città; Marsilio, 2015. p. 35-39.
Venice Time Machine : Recreating the density of the past
This article discusses the methodology used in the Venice Time Machine project (http://vtm.epfl.ch) to reconstruct a historical geographical information system covering the social and urban evolution of Venice over a period of 1,000 years. Given the time span considered, the project used a combination of sources and a specific approach to align heterogeneous historical evidence into a single geographic database. The project is based on a mass digitization project of one of the largest archives in Venice, the Archivio di Stato. One goal of the project is to build a kind of ‘Google map’ of the past, presenting a hypothetical reconstruction of Venice in 2D and 3D for any year starting from the origins of the city to present-day Venice.
2015. Digital Humanities 2015, Sydney, June 29 - July 3, 2015.
2014
Carlo Helman : merchant, patron and collector in the Antwerp – Venice migrant network
This contribution is part of the monographic issue of the Nederlands Yearbook for History of Art dedicated to a large overview of "Art and Migration. Netherlandish Artists on the Move, 1400-1750". In the dynamics of migration, circulation and settlement across Europe in the Modern Era, network analysis plays a fundamental role. The essay explores the prominent role played by Antwerp merchants in Venice in forging contacts between artists, patrons and agents of art, and in promoting the exchange of goods and ideas within their adopted home. In the course of the 16th century, and more particularly towards the end of that period, the complex network of Netherlandish merchant families, operating on a European level, played a crucial role in the circulation of artists, paintings and other artworks in Italy and beyond. The article deals with Carlo Helman, a Venetian resident of Antwerp origins, a major figure whose importance in this context has been insufficiently studied. Helman's family firm traded in practically every kind of commodity, ranging from wool and spices to pearls and diamonds, and, indeed, artworks, "in omnibus mundis regnis", as we read in the commemorative inscription on his monumental tomb in the Venetian church of Santa Maria Formosa. A high-class international trader in Venice, Helman was consul of the "Nattione Fiamenga". He had a conspicuous collection of art, including classics of the "Venetian maniera" like Titian, Veronese and Bassano, but also important pictures by Northern masters. Moreover, his collection contained a remarkable cartographic section. In Venice, Helman had contacts with the Bassano dynasty, Paolo Fiammingo, Dirck de Vries, Lodewijck Toeput (Pozzoserrato) and the Sadeler brothers, artists who, in one way or another, introduced novel themes and typologies on the Italian, and, indeed, European market. The dedication to Helman on a print by Raphael Sadeler, reproducing Bassano's Parable of the Sower, captures the merchant's role in the diffusion of Bassanesque themes in the North. Helman's connections with the Zanfort brothers, dealers in tapestries and commercial agents of Hieronymus Cock, are further indications of the merchant's exemplary role as collector, merchant and agent of artists in a European network of "art" commerce.
Art and Migration. Netherlandish Artists on the Move, 1400-1750; Leiden: Brill, 2014. p. 325-347.
“Cities of Fire”. Iconography, Fortune and the Circulation of Fire Paintings in Flanders and Italy in the XVI Century.
The Wounded City. The representation of Urban Disasters in European Art (XV-XX Centuries); Leiden: Brill, 2014. p. 100–115.
2013
From calle to insula: the case of Santa Maria della Carità in Venice
The histories of the monastery of Santa Maria della Carità and of the Scuola Grande della Carità are interwoven with the urban physiognomy of the extreme offshoot of the sestiere of Dorsoduro, the very tip of which overlooks St. Mark’s basin, the island of San Giorgio Maggiore (where the Benedictine monastery lies) and the island of the Giudecca. The history of the original founding of the monastery of Santa Maria della Carità has its beginnings on land owned by the Zulian family, on which Marco Zulian had decided to establish a place of worship surrounded by other properties owned by the family. The land was located along the San Vidal canal, which would eventually become the Grand Canal. The monastery had been affiliated with Santa Maria in Porto outside Ravenna since 1134, and the decision to relocate seems to have been imposed from above by Pope Innocent II, who urged the canons either to establish themselves in the assigned seat or to give it up. A few years later the new coenoby came into its own, cutting loose from the founder’s family and following the Rule of St. Augustine. The monastery’s next two settlements in the lagoon were San Salvador and San Clemente in Isola, the religious founding of which was promoted by Enrico Dandolo. Both were crucial parts of Venice’s early urban fabric: the church of San Salvador was built upon divine revelation in the central commercial area of Rialto while the monastery of San Clemente was a resting-place for pilgrims on the island of the same name, located on the route connecting the area of St. Mark’s, the Lido and the mouths of the lagoon. The complex of the Trinità, located near the abbey of San Gregorio and thus connected to the monastery of Santa Maria della Carità, was another transit point for pilgrims en route to the Holy Land.
Built city, designed city, virtual city, the museum of the city; Roma: Croma, 2013. p. 153-169.
The Ghetto inside out
Venice: Corte del Fondego.
2010
The oltramontani Network in Venice: Hans von Aachen in Context
Thanks to recent archival and historical research it is now possible to specify the identity of some of the figures mentioned in Van Mander’s Lives as being in close contact with Hans von Aachen. The reconstruction of the Venetian and Trevisan context in which the artist moved reveals a dense network of relationships woven by the Flemish and German communities. The presence of a portrait by Hans von Aachen in the collection of paintings of Francesco Vrients is very valuable information: it outlines the painter as an intimate friend of the Vrients family, and at the same time the discovery of the inscription on the drawings of Cephalus and Procris (presented for this exhibition) is an important pointer for profiling the Vrients circle and its relationships with the Flemish jewellers’ lobby. Vrients is indeed the collector from Maastricht mentioned by Van Mander and one of the most eminent Flemish personalities in the lagoon, around whom intellectuals and artists probably gravitated: it is a fact that the man of letters Pieter Cornelisz de Hooft found hospitality in his house, in Campo Santa Maria Formosa, on the occasion of his trip to Italy in 1599. Additional documents also specify the role of Gaspar Rem in the Venetian and international context: his strong tie to the circle of the Sadelers who, especially with a shrewd art dealer like Giusto, played a crucial role in promoting the “Oltramontani” artists, weaving friendships with Dirck de Vries, Rottenhammer and Joannes Koenig, to name a few.
2010. Hans von Aachen in Context, Proceedings of the International Conference, Prague, September 22–25, 2010. p. 28-37.
Sélection de publications
Autres publications
Conferences
2021-04 SPEAKER “Large Scale Visual Pattern Search. New Perspectives on Renaissance Artwork Motifs Circulation”, Renaissance Society of America Conference, Dublin 2021 (online - Covid format compatible)
2021-05 INVITED SPEAKER “Metodi e AI per estrarre e esplorare informazioni in large historical corpora” - “Methods and AI for extracting and exploring information in large historical corpora”, Università Bocconi (Milano), online seminar for faculty members (online - Covid format compatible)
2021-05 INVITED SPEAKER “Lausanne Time Machine”, Réunion annuelle de l’Association vaudoise des archivistes (online - Covid format compatible)
2020-11 SPEAKER “Paintings by AI. Large Scale search for Visual Similarities”, in La mesure des images : approches computationnelles en histoire et théorie des arts, DHNord 2020 (online)
2019-12 INVITED SPEAKER “Chercher dans les grands corpus d’images à travers l’Intelligence Artificielle : défis et résultats” in Association des diplômés et des étudiants de master de l’École nationale des chartes (Ademec), « Intelligence artificielle et institutions patrimoniales : enjeux, défis et opportunités », École nationale des Chartes et Bibliothèque nationale de France, Paris 2019
2019-10 SPEAKER “Repeupler le Quartier Richelieu (1839-1922)”, Semaine académique de l’Université de WUDA et École nationale des Chartes, Paris 2019
2019-10 SPEAKER “Les Musées dans l’ère des répliques authentiques et des Mondes Miroirs”, in Faut-il vider les Bibliothèques et les Musées?, Académie royale des Sciences, des Lettres et des Beaux-Arts de Belgique avec le parrainage du Collège de France, (with Frédéric Kaplan) Bruxelles 2019
2019-10 SPEAKER “Lausanne Time Machine”, presentation for the launch of the Center in Digital Humanities EPFL and UNIL, Lausanne 2019
2019-07 SPEAKER “Repopulating Paris: massive extraction of 4 Million addresses from city directories between 1839 and 1922”, (with Albane Descombes) Digital Humanities Conference 2019, Utrecht
2019-07 SPEAKER “A Deep Learning Approach to cadastral computing”, (with Sophia Ares Oliveira) Digital Humanities Conference 2019, Utrecht
2019-06 SPEAKER “Repopulating Paris. 4 Million of addresses (1839-1922)”, Symposium in collaboration with Getty Research Institute L.A., Institut National d’Histoire de l’Art, Paris
2019-05 SPEAKER “Repeupler Paris. 4 millions d’adresses des Almanach-Bottins du commerce”, Atelier Corpus, Bibliothèque nationale de France
2019-04 INVITED SPEAKER “Venice Time Machine Project, Modelling the Past”, (with Frédéric Kaplan) Journées de la recherche de l'IGN (Institut National de l’Information Geographique et Forestière), Cité Descartes, Marne la Vallée – Paris
2019-01 INVITED SPEAKER “The Venice Time Machine Project: Digitising Heritage in Time and Space” (with Frédéric Kaplan), Warburg Institute - London, 2019
2018-10 SPEAKER and CHAIR “Time Machine Conference”, EPFL - Lausanne 2018
2018-10 SPEAKER “Richelieu. Histoire du quartier”, Institut National d’Histoire de l’Art (INHA) – Paris
2018-09 SPEAKER “The Digital Cadastre of Venice in 1808”, (with Bastien Tourenc), EAUH European Association of Urban History – Rome 2018
2017-11 INVITED SPEAKER “Le projet Time Machine Européen”, Bibliothèque Nationale de France – Atelier Corpus Data BNF – Paris 2017
2017-05 INVITED SPEAKER “The Venice Time Machine Project”, German Center for Art History - Paris 2017
2017-06 INVITED SPEAKER “Web interfaces for 4D Urban Model”, (with Frédéric Kaplan) IGN (Institut National de l’Information Geographique et Forestière) – Paris 2017
2017-03 SPEAKER “Optimized scripting in Massive Open Online Courses” (with Frédéric Kaplan) Dariah Teach, Université de Lausanne - Lausanne 2017
2017-04 INVITED SPEAKER “The Venice Time Machine” (with Frédéric Kaplan) European Union and members of the European Archives Group in Malta (EBNA Meeting) – Malta 2017
2016-10 INVITED SPEAKER “European circulation of people, goods and patterns: The Venice Time Machine methodology” (with Frédéric Kaplan) CREATE, An e-humanities perspective – Amsterdam 2016
2016-03 SESSION ORGANIZER “Images on the Move: The Weaving of Circulations and Transfers during the Renaissance through Digital Analysis”, in RSA : Renaissance Society of America – Boston 2016
2016-03 SPEAKER “Mapping the Flow of Paintings in the Renaissance” in RSA : Renaissance Society of America – Boston 2016
2015-09 CHAIR OF SESSION “Le porte dopo le porte. Varchi, barriere, caselli daziari: le chiavi dell'accesso e dell'approvvigionamento urbano”, in AISU Associazione Italiana di Storia Urbana – Padova 2015
2015-06 SPEAKER “Venice Time Machine: Recreating the density of the past” (with Frédéric Kaplan) Annual conference of the Alliance of Digital Humanities Organizations – Sydney 2015
2015-03 SPEAKER “La numérisation massive des œuvres d’art et ses conséquences sur l’histoire de l’art : REPLICA project” in Cloud Collections. Aspects juridiques, scientifiques et techniques de la numérisation de l’art, Neuchâtel 2015
2014-03 SPEAKER “Trading knowledge across Europe: database analysis networks (1550-1650)” in Annual meeting conference of Renaissance Society of America (RSA) – Berlin 2014
2013-09 CHAIR OF SESSION “The ephemeral city : invention, rhetoric and counterfeiting (XV-XVIII centuries)” in AISU (Associazione italiana storia urbana) Visibile invisibile, percepire la città tra descrizioni e omissioni – Catania 2013
2013-06 SPEAKER “Built cities, Designed cities, Virtual cities : The museum of the city.” Politecnico di Torino for « Can the European cities be considered as a Cultural Heritage ? Per una storia della città europea come Cultural Heritage » – Turin 2013
2012-12 SPEAKER “Patron, Collector and Agent: On Carlo Helman and his Network Role in the Production, Circulation and Consumption of Pictures between Antwerp and Venice, ca. 1600” in Symposium at Kasteel Well (Nederland), for the Nederlands Kunsthistorisch Jaarboek: Artists on the Move: Migrating artists from the Low Countries, 1400-1750, Well-Limbourg (ND)
2012-03 SPEAKER “Dalla scala urbana all’allestimento : l’insula delle Gallerie dell’Accademia” Politecnico di Torino; convegno “Digital Urban History – La storia della città (raccontata) all’epoca della rivoluzione informatica”, Turin 2012
2011-11 SPEAKER “L’oratorio dei tedeschi. Artisti oltramontani nella chiesa di San Bartolomeo” and curator of the conference La chiesa di San Bartolomeo e la comunità tedesca, Studium Generale Marcianum; Centro Tedesco di Studi Veneziani (Deutsche Studienzentrum in Venedig) – Venice
2011-09 SPEAKER “Mercanti, collezionisti, agenti. Il ruolo delle nationi fiamenga e todesca nella nascita e diffusione dei Generi pittorici” International conference for Foundation Ermitage Italia (FEI): Alle origini dei Generi Pittorici fra l’Italia e l’Europa attorno al 1600 – Ferrara
2011-09 SPEAKER “‘Cities of Fire’. Iconography, Fortune and the Circulation of Fire Paintings in Flanders and Italy in the XVI Century”, V Convegno AISU, Fuori dall’ordinario: la città di fronte a catastrofi ed eventi eccezionali, Rome
2011-06 SPEAKER “Firenze e Venezia, convergenze: il network nederlandese e i rapporti con il collezionismo mediceo” Lesson for the Istituto Universitario Olandese di Storia dell’Arte di Firenze/ Nederlands Inter Universitair Kunst Historisch Instituut – Florence 2011
2010-09 SPEAKER “The Oltramontani Network in Venice: Hans von Aachen in context” Conference at the Institute of Art History, Academy of Sciences of the Czech Republic: Hans von Aachen and new research in the transfer of artistic ideas into Central Europe – Prague 2010
2010-04 SPEAKER “Exploring the Natione fiamenga in Venice. The influence of this newly created Establishment and its impact on pictorial exchanges: The Bassano case.” Annual meeting conference of Renaissance Society of America (RSA) – Venice 2010
Recherche
Parcels of Venice
FNS GRANT_NUMBER: 185060
During the research activities on 2D and 3D modeling for the Venice Time Machine project, a specific analysis was developed on the georeferencing and feature extraction of the city's Napoleonic cadastre (1808). In 2018, an SNSF project entitled Parcels of Venice was successfully submitted to further the extraction and analysis of cadastral sources. The grant began in May 2019. The project aims to be one of the first attempts to address the density and richness of primary and secondary sources on Venice in the 18th and 19th centuries, seeking to overcome their intrinsically fragmented nature and to offer an integrated model of the city and its morphological evolution.
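As a purely illustrative sketch of what joining extracted cadastral geometries with transcribed register entries can look like (the parcel numbers, owners, functions and coordinate reference system below are invented assumptions, not the project's data), parcel footprints can be kept in a GeoDataFrame and merged with a table of ownership and function on the parcel identifier.

```python
import geopandas as gpd
import pandas as pd
from shapely.geometry import Polygon

# Invented parcel footprints standing in for geometries extracted from the 1808 cadastral map.
parcels = gpd.GeoDataFrame(
    {"parcel_id": [101, 102]},
    geometry=[
        Polygon([(0, 0), (0, 10), (8, 10), (8, 0)]),
        Polygon([(8, 0), (8, 10), (15, 10), (15, 0)]),
    ],
    crs="EPSG:4326",  # placeholder coordinate reference system for the example
)

# Invented transcriptions of the cadastral register: owner and declared function per parcel.
register = pd.DataFrame(
    {
        "parcel_id": [101, 102],
        "owner": ["Contarini", "Scuola di San Rocco"],
        "function": ["dwelling", "warehouse"],
    }
)

# Join geometry and textual information on the parcel number.
annotated = parcels.merge(register, on="parcel_id")
print(annotated[["parcel_id", "owner", "function"]])
```

Keeping geometry and textual attributes in a single table is what makes it possible to map ownership and functional categories back onto the parcel fabric of the city.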
Enseignement & PhD
Enseignement
Architecture
Humanities and Social Sciences Program
Doctorants
Guhennec Paul Robert, Petitpierre Rémi Guillaume, Vaienti Beatrice
A dirigé les thèses EPFL de
Seguin Benoît Laurent Auguste
Histoire Urbaine Digitale
The Digital Urban History course is part of a new range of interdisciplinary and collaborative courses open to UNIL and EPFL students. This course aims to develop interdisciplinary skills by combining the fields of expertise of history and digital studies.
It mainly focuses on theoretical and practical learning of digital methods applied to the analysis of past cities.
The course explores the digitization of historical cartography and information modeling of historical data concerning the city.
It covers the use and extraction of cadastral, demographic, and iconographic sources, as well as sources that tell the city's story from other perspectives, such as the historical press and trade almanacs (a minimal parsing sketch follows this course description). The course also has a theoretical part in which case studies from across Europe are analyzed.
Students work on data extracted from ongoing urban analysis projects.
Since 2020, students have developed projects on Lausanne and the surrounding area. The site is analyzed in its evolution over time from multiple angles: the morphological evolution of the city, population history, cultural heritage, uninhabited space and ecology, and textual sources such as the press or literary works. All the projects are published online.
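As an example of the kind of exercise such sources invite, the sketch below parses directory-style lines in a simplified, invented "name, trade, street, number" layout; the format and the sample lines are assumptions for illustration only, not the structure of any specific almanac.

```python
import re
from typing import Optional

# Invented directory lines in a simplified "name, trade, street, number" layout.
LINES = [
    "Dupont, boulanger, rue de la Paix, 12",
    "Martin, horloger, boulevard Saint-Michel, 45",
]

ENTRY = re.compile(r"^(?P<name>[^,]+),\s*(?P<trade>[^,]+),\s*(?P<street>[^,]+),\s*(?P<number>\d+)$")


def parse(line: str) -> Optional[dict]:
    """Turn one directory line into a structured record, or None if it does not match."""
    match = ENTRY.match(line.strip())
    return match.groupdict() if match else None


records = [r for r in (parse(line) for line in LINES) if r]
for r in records:
    print(r["name"], "|", r["trade"], "|", r["street"], r["number"])
```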
Digital Humanities
The Digital Humanities course provides a comprehensive theoretical foundation in digital humanities while also offering a hands-on approach to digital prosopography. Students learn to transform biographical narratives, traced across time and space, into digital data. The primary objective is for participants to create wiki-based resources by piecing together biographical profiles of individuals who, though mentioned in historical records like newspapers, lack an online presence. The curriculum covers essential skills such as wiki syntax, person identification, Ngram analysis, and digital cartography. Through this course, students uncover the 'dark matter' of history: those personalities referenced in historical documents but absent from the digital realm. This exploration emphasizes the importance of digitizing historical data, a key process for expanding and refining our collective understanding of history.
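Among the skills listed, Ngram analysis can be illustrated with a small self-contained sketch; the sample sentence and the name being tracked are invented for the example.

```python
from collections import Counter
from typing import List, Tuple


def ngrams(tokens: List[str], n: int) -> List[Tuple[str, ...]]:
    """Return all contiguous n-grams (as tuples) from a list of tokens."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]


# Invented snippet standing in for a digitized newspaper page.
text = "M. Charles Perrin a ouvert un atelier . M. Charles Perrin expose ses machines ."
tokens = text.lower().split()

# Count bigrams and check how often a given two-word name appears in the snippet.
bigram_counts = Counter(ngrams(tokens, 2))
print(bigram_counts[("charles", "perrin")])  # prints 2 for this snippet
```

Counting how often a multi-word name occurs across a digitized corpus is one simple way to trace a person's presence over time before assembling a fuller prosopographic record.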