Prof. Frédéric Kaplan holds the Digital Humanities Chair at the École Polytechnique Fédérale de Lausanne (EPFL) and directs the EPFL Digital Humanities Laboratory (DHLAB). He conducts research projects combining archive digitization, information modeling and museographic design. He is currently directing the “Venice Time Machine”, an international project in collaboration with the Ca’ Foscari University in Venice and the Venice State Archives, aiming to model the evolution and history of Venice over a 1,000-year period. He is a member of the steering committee of the “Time Machine FET Flagship”, a European project involving more than 200 institutions, competing for 1 billion euros in funding from the European Commission. In parallel with his scientific work, Frédéric Kaplan has participated in exhibitions at several museums, including the Biennale of Architecture in Venice, the Grand Palais and the Centre Pompidou in Paris, and the Museum of Modern Art in New York.
Frédéric Kaplan graduated as an engineer from the École Nationale Supérieure des Télécommunications in Paris and received a PhD in Artificial Intelligence from the University Paris VI. Before founding the Digital Humanities Laboratory, he worked for ten years as a researcher at the Sony Computer Science Laboratory and for six years at the EPFL pedagogical research laboratory. He was also the founder and president of OZWE, now one of the world's leading studios in immersive gaming.
Frédéric Kaplan has published more than a hundred scientific papers and 8 books, and holds about 10 patents. He is the editor-in-chief of Frontiers in Digital Humanities and co-directs the Digital Humanities book collection at EPFL Press. He created the first Digital Humanities master's course in Switzerland and is now taking an active role in shaping a completely new curriculum at EPFL. He was the local co-organizer of the Digital Humanities 2014 conference in Lausanne, the largest scientific meeting ever held in this domain.
On the web
Talk on the Radio (Avis d'Experts)
Fields of expertise
A deep learning approach to Cadastral Computing
2019-07-11. Digital Humanities Conference, Utrecht, Netherlands, July 8-12, 2019.
This article presents the first fully automatic pipeline to transform the Napoleonic cadastres into an information system. The cadastres established during the first years of the 19th century cover a large part of Europe. For many cities they provide one of the first geometrical surveys, linking precise parcels with identification numbers. These identification numbers point to registers recording the names of the proprietors. As the Napoleonic cadastres include millions of parcels, they offer a detailed snapshot of a large part of Europe’s population at the beginning of the 19th century. Because many kinds of computation can be performed on such a large object, we use the neologism “cadastral computing” to refer to the operations performed on such datasets.
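As a minimal sketch of the final linking step described above (with hypothetical field names and toy data, not the project's actual data model), each extracted parcel geometry carries an identification number that is joined against the register of proprietors:

```python
# Hypothetical toy data: extracted parcel geometries and a register
# mapping identification numbers to proprietor names.
parcels = [
    {"id": "1024", "polygon": [(0, 0), (1, 0), (1, 1), (0, 1)]},
    {"id": "1025", "polygon": [(1, 0), (2, 0), (2, 1), (1, 1)]},
]
register = {"1024": "Giovanni Rossi", "1025": "Maria Bianchi"}

def link_parcels(parcels, register):
    """Join each parcel with its register entry via the identification number."""
    return [
        {**p, "proprietor": register.get(p["id"], "unknown")}
        for p in parcels
    ]

linked = link_parcels(parcels, register)
```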
Informatica per Umanisti: da Venezia al mondo intero attraverso l’Europa (Computing for Humanists: from Venice to the Whole World through Europe)
Lecture for the Società Dante Alighieri, University of Bern, Switzerland, December 10, 2018.
At a time when the scientific world is opening up to a broader public, this lecture is meant as an accessible introduction to the digital humanities. Its subject is computing for humanists, a new field of research that enriches the humanistic disciplines through the use of new technologies. My personal experience will be the guiding thread of this introduction, and the lecture will be an occasion to discuss the projects I have contributed to over the past five years. From Paris to Venice, from Lausanne to Boston, doing research means gaining experience all over the world. I will speak of Bruno Latour and his modes of existence, of Frédéric Kaplan and his time machine, of Franco Moretti and his distant reading, and of Marilyne Andersen and her cartography of affinities, all people I have had the pleasure of meeting and who have enriched my academic path. Through a visual narrative of images and videos, I will explain how the Digital Humanities can make archives, museums and libraries more interesting places for everyone.
dhSegment: A generic deep-learning approach for document segmentation
The 16th International Conference on Frontiers in Handwriting Recognition, Niagara Falls, USA, 5-8 August 2018.
Comparing human and machine performances in transcribing 18th century handwritten Venetian script
2018-07-26. Digital Humanities Conference, Mexico City, Mexico, June 24-29, 2018.
Automatic transcription of handwritten texts has made important progress in recent years. This increase in performance, essentially due to new architectures combining convolutional neural networks with recurrent neural networks, opens new avenues for searching in large databases of archival and library records. This paper reports on our recent progress in making a million digitized Venetian documents searchable, focusing on a first subset of 18th-century fiscal documents from the Venetian State Archives. For this study, about 23’000 image segments containing 55’000 Venetian names of persons and places were manually transcribed by archivists trained to read this kind of handwritten script. This annotated dataset was used to train and test a deep learning architecture with a performance level (about 10% character error rate) that is satisfactory for search use cases. The paper then compares this level of reading performance with the reading capabilities of Italian-speaking transcribers. More than 8500 new human transcriptions were produced, confirming that the amateur transcribers were not as good as the experts. However, on average, the machine outperforms the amateur transcribers in this transcription task.
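The character error rate (CER) cited above is conventionally the edit distance between the reference transcription and the machine output, normalized by the reference length. A minimal stdlib sketch:

```python
def levenshtein(a, b):
    """Edit distance (insertions, deletions, substitutions) between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def cer(reference, hypothesis):
    """Character error rate: edit distance normalized by reference length."""
    return levenshtein(reference, hypothesis) / max(len(reference), 1)
```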
The Scholar Index: Towards a Collaborative Citation Index for the Arts and Humanities
Mexico City, 26-29 June 2018.
Deep Learning for Logic Optimization Algorithms
2018-05-27. 2018 IEEE International Symposium on Circuits and Systems (ISCAS), Florence, Italy, May 27-30, 2018. p. 1-4.
DOI : 10.1109/ISCAS.2018.8351885.
The slowing down of Moore's law and the emergence of new technologies put increasing pressure on the field of EDA. There is a constant need to improve optimization algorithms. However, finding and implementing such algorithms is a difficult task, especially with the novel logic primitives and potentially unconventional requirements of emerging technologies. In this paper, we cast logic optimization as a deterministic Markov decision process (MDP). We then take advantage of recent advances in deep reinforcement learning to build a system that learns how to navigate this process. Our design has a number of desirable properties. It is autonomous because it learns automatically and does not require human intervention. It generalizes to large functions after training on small examples. Additionally, it intrinsically supports both single- and multi-output functions, without the need to handle special cases. Finally, it is generic because the same algorithm can be used to achieve different optimization objectives, e.g., size and depth.
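To make the MDP framing concrete, here is a toy deterministic MDP (illustrative only, not the paper's actual state or action space): states are network sizes, actions are rewrite moves, the reward of a move is the size reduction it achieves, and value iteration recovers the size-reducing policy.

```python
transitions = {  # state -> {action: next_state}
    6: {"merge": 5, "noop": 6},
    5: {"merge": 4, "noop": 5},
    4: {"noop": 4},  # fully optimized: no further reduction possible
}

def value_iteration(transitions, gamma=0.9, iters=50):
    """Tabular value iteration; reward of s -> s2 is the size drop (s - s2)."""
    V = {s: 0.0 for s in transitions}
    for _ in range(iters):
        for s, acts in transitions.items():
            V[s] = max((s - s2) + gamma * V[s2] for s2 in acts.values())
    return V

def greedy_action(transitions, V, s, gamma=0.9):
    """Best action at state s under the computed value function."""
    acts = transitions[s]
    return max(acts, key=lambda a: (s - acts[a]) + gamma * V[acts[a]])

V = value_iteration(transitions)
```

The deep-RL system in the paper replaces this table with a learned function so that the policy generalizes to functions far larger than the training examples.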
Mapping Affinities in Academic Organizations
Frontiers in Research Metrics and Analytics. 2018-02-19.
DOI : 10.3389/frma.2018.00004.
Scholarly affinities are one of the most fundamental hidden dynamics that drive scientific development. Some affinities are actual, and consequently can be measured through classical academic metrics such as co-authoring. Other affinities are potential, and therefore do not leave visible traces in information systems; for instance, some peers may share interests without actually knowing it. This article illustrates the development of a map of affinities for academic collectives, designed to be relevant to three audiences: the management, the scholars themselves, and the external public. Our case study involves the School of Architecture, Civil and Environmental Engineering of EPFL, hereinafter ENAC. The school consists of around 1,000 scholars, 70 laboratories, and 3 institutes. The actual affinities are modeled using the data available from the information systems reporting publications, teaching, and advising scholars, whereas the potential affinities are addressed through text mining of the publications. The major challenge for designing such a map is to represent the multi-dimensionality and multi-scale nature of the information. The affinities are not limited to the computation of heterogeneous sources of information; they also apply at different scales. The map, thus, shows local affinities inside a given laboratory, as well as global affinities among laboratories. This article presents a graphical grammar to represent affinities. Its effectiveness is illustrated by two actualizations of the design proposal: an interactive online system in which the map can be parameterized, and a large-scale carpet of 250 square meters. 
In both cases, we discuss how the materiality influences the representation of data, in particular the way key questions could be appropriately addressed considering the three target audiences: the insights gained by the management and their consequences in terms of governance, the understanding of the scholars’ own positioning in the academic group in order to foster opportunities for new collaborations and, eventually, the interpretation of the structure from a general public to evaluate the relevance of the tool for external communication.
Negentropic linguistic evolution: A comparison of seven languages
2018. Digital Humanities 2018, Mexico City, Mexico, June 26-29, 2018.
The relationship between the entropy of language and its complexity has been the subject of much speculation – some seeing the increase of linguistic entropy as a sign of linguistic complexification, others interpreting entropy drop as a marker of greater regularity. Some evolutionary explanations, like the learning bottleneck hypothesis, argue that communication systems having more regular structures tend to have evolutionary advantages over more complex structures. Other structural effects of communication networks, like globalization of exchanges or algorithmic mediation, have been hypothesized to have a regularization effect on language. Longer-term studies are now possible thanks to the arrival of large-scale diachronic corpora, like newspaper archives or digitized libraries. However, simple analyses of such datasets are prone to misinterpretations due to significant variations of corpus size over the years and the indirect effect this can have on various measures of language change and linguistic complexity. In particular, it is important not to misinterpret the arrival of new words as an increase in complexity, as this variation is intrinsic, as is the variation of corpus size. This paper is an attempt to conduct an unbiased diachronic study of linguistic complexity over seven different languages using the Google Books corpus. The paper uses a simple entropy measure on a closed, but nevertheless large, subset of words, called the kernel. The kernel contains only the words that are present without interruption for the whole length of the study. This excludes all the words that arrived or disappeared during the period. We argue that this method is robust to variations of corpus size and makes it possible to study changes in complexity despite possible (and in the case of Google Books unknown) changes in the composition of the corpus.
Indeed, the evolution observed on the seven different languages shows rather different patterns that are not directly correlated with the evolution of the size of the respective corpora. The rest of the paper presents the methods followed, the results obtained and the next steps we envision.
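The kernel-based entropy measure can be sketched as follows (a simplified illustration; the paper's exact estimator may differ): the kernel keeps only words present in every year, and entropy is computed over word frequencies restricted to that closed set.

```python
import math
from collections import Counter

def kernel(corpora_by_year):
    """The closed subset of words present in every year of the corpus."""
    years = iter(corpora_by_year.values())
    k = set(next(years))
    for tokens in years:
        k &= set(tokens)
    return k

def kernel_entropy(tokens, kernel_words):
    """Shannon entropy (bits) of word frequencies restricted to the kernel."""
    counts = Counter(w for w in tokens if w in kernel_words)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```

Because the kernel is fixed across the whole period, words that appear or disappear never enter the measure, which is what makes it robust to changes in corpus size and composition.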
Making large art historical photo archives searchable
Lausanne, EPFL, 2018.
DOI : 10.5075/epfl-thesis-8857.
In recent years, museums, archives and other cultural institutions have initiated important programs to digitize their collections. Millions of artefacts (paintings, engravings, drawings, ancient photographs) are now represented in digital photographic format. Furthermore, through progress in standardization, a growing portion of these images are now available online, in an easily accessible manner. This thesis studies how such large-scale art history collections can be made searchable using new deep learning approaches for processing and comparing images. It takes as a case study the processing of the photo archive of the Foundation Giorgio Cini, where more than 300'000 images have been digitized. We demonstrate how a generic processing pipeline can reliably extract the visual and textual content of scanned images, opening up ways to efficiently digitize large photo collections. Then, by leveraging an annotated graph of visual connections, a metric is learnt that allows clustering and searching through artwork reproductions independently of their medium, effectively solving a difficult problem of cross-domain image search. Finally, the thesis studies how a web interface allows users to perform different searches based on this metric. We also evaluate the process by which users can annotate elements of interest during their navigation, to be added to the database, allowing the system to be trained further and give better results. By documenting a complete approach on how to go from a physical photo archive to a state-of-the-art navigation system, this thesis paves the way for a global search engine across the world's photo archives.
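Once images are embedded as vectors under the learnt metric, search reduces to similarity ranking. A schematic sketch (the hypothetical 2-d embeddings below stand in for learnt descriptors; learning the embedding itself is the hard part addressed by the thesis):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def nearest(query_vec, catalogue):
    """Rank catalogue items by embedding similarity to the query image."""
    return sorted(catalogue, key=lambda item: cosine(query_vec, item["vec"]),
                  reverse=True)

# Toy catalogue: a reproduction in another medium should rank close to
# the original, an unrelated work far away.
catalogue = [
    {"id": "painting", "vec": [1.0, 0.0]},
    {"id": "engraving-of-painting", "vec": [0.9, 0.1]},
    {"id": "unrelated-drawing", "vec": [0.0, 1.0]},
]
ranked = nearest([1.0, 0.0], catalogue)
```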
The Intellectual Organisation of History
Lausanne, EPFL, 2018.
DOI : 10.5075/epfl-thesis-8537.
A tradition of scholarship discusses the characteristics of different areas of knowledge, in particular after modern academia compartmentalized them into disciplines. The academic approach is often called into question: are there two or more cultures? Is an ever-increasing specialization the only way to cope with information abundance, or are holistic approaches helpful too? What is happening with the digital turn? If these questions are well studied for the sciences, our understanding of how the humanities might differ in their own respect is far less advanced. In particular, modern academia might foster specific patterns of specialization in the humanities. Eventually, the recent rise in the application of digital methods to research, known as the digital humanities, might be introducing structural adaptations through the development of shared research technologies and the advent of organizational practices such as the laboratory. It therefore seems timely and urgent to map the intellectual organization of the humanities. This investigation depends on a few traits, such as the level of codification, the degree of agreement among scholars, and the level of coordination of their efforts. These characteristics can be studied by measuring their influence on the outcomes of scientific communication. In particular, this thesis focuses on history as a discipline, using bibliometric methods. In order to explore history in its complexity, an approach to create collaborative citation indexes in the humanities is proposed, resulting in a new dataset comprising monographs, journal articles and citations to primary sources. Historians' publications were found to organize thematically and chronologically, sharing a limited set of core sources across small communities. Core sources act in two ways with respect to the intellectual organization: locally, by adding connectivity within communities, or globally, as weak ties across communities.
Over recent decades, fragmentation has been on the rise in the intellectual networks of historians, and a comparison across a variety of specialisms from the human, natural and mathematical sciences revealed the fragility of such networks across the axes of citation and textual similarities. Humanists organize into more, smaller and more scattered topical communities than scientists. A characterisation of history is eventually proposed. Historians produce new historiographical knowledge with a focus on evidence or interpretation. The former aims at providing the community with an agreed-upon factual resource. Interpretive work is instead mainly focused on creating novel perspectives. A second axis refers to two modes of exploration of new ideas: in-breadth, where novelty relates to adding new, previously unknown pieces to the mosaic, or in-depth, where novelty happens by improving on previous results. While all combinations are possible, historians tend to focus on in-breadth interpretations, with the immediate consequence that growth accentuates intellectual fragmentation in the absence of further consolidating factors such as theory or technologies. Research on evidence might have a different impact by potentially scaling up in the digital space, and in so doing influence the modes of interpretation in turn. This process is not dissimilar to the gradual rise in importance of research technologies and collaborative competition in the mathematical and natural sciences. This is perhaps the promise of the digital humanities.
Mapping affinities: visualizing academic practice through collaboration
DOI : 10.5075/epfl-thesis-8242.
Academic affinities are one of the most fundamental hidden dynamics that drive scientific development. Some affinities are actual, and consequently can be measured through classical academic metrics such as co-authoring. Other affinities are potential, and therefore do not have visible traces in information systems; for instance, some peers may share scientific interests without actually knowing it. This thesis illustrates the development of a map of affinities for scientific collectives, which is intended to be relevant to three audiences: the management, the scholars themselves, and the external public. Our case study involves the School of Architecture, Civil and Environmental Engineering of EPFL, which consists of three institutes, seventy laboratories, and around one thousand employees. The actual affinities are modeled using the data available from the academic systems reporting publications, teaching, and advising, whereas the potential affinities are addressed through text mining of the documents registered in the information system. The major challenge for designing such a map is to represent the multi-dimensionality and multi-scale nature of the information. The affinities are not limited to the computation of heterogeneous sources of information, they also apply at different scales. Therefore, the map shows local affinities inside a given laboratory, as well as global affinities among laboratories. The thesis presents a graphical grammar to represent affinities. This graphical system is actualized in several embodiments, among which a large-scale carpet of 250 square meters and an interactive online system in which the map can be parameterized. 
In both cases, we discuss how the actualization influences the representation of data, in particular the way key questions could be appropriately addressed considering the three target audiences: the insights gained by the management and the resulting decisions, the understanding of the researchers' own positioning in the academic collective that might reveal opportunities for new synergies, and eventually the interpretation of the structure from an external standpoint, suggesting the relevance of the tool for communication.
Layout Analysis on Newspaper Archives
2017. Digital Humanities 2017, Montreal, Canada, August 8-11, 2017.
The study of newspaper layout evolution through historical corpora has been addressed by diverse qualitative and quantitative methods in the past few years. The recent availability of large corpora of newspapers is now making the quantitative analysis of layout evolution ever more popular. This research investigates a method for the automatic detection of layout evolution on scanned images with a factorial analysis approach. The notion of eigenpages is defined by analogy with eigenfaces used in face recognition processes. The corpus of scanned newspapers that was used contains 4 million press articles, covering about 200 years of archives. This method can automatically detect layout changes of a given newspaper over time, rebuilding a part of its past publishing strategy and retracing major changes in its history in terms of layout. Besides these advantages, it also makes it possible to compare several newspapers at the same time and therefore to compare the layout changes of multiple newspapers based only on scans of their issues.
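By analogy with eigenfaces, eigenpages are the principal components of mean-centred, flattened page scans. A pure-Python power-iteration sketch of the first component (illustrative only; the study would use standard linear-algebra tooling on real scans):

```python
import math

def principal_component(pages, iters=200):
    """First 'eigenpage' of mean-centred, flattened page scans,
    computed by power iteration (each page is a list of pixel values)."""
    n, d = len(pages), len(pages[0])
    mean = [sum(p[j] for p in pages) / n for j in range(d)]
    X = [[p[j] - mean[j] for j in range(d)] for p in pages]
    v = [1.0] * d
    for _ in range(iters):
        # Apply the covariance operator X^T X to v without materializing
        # the d x d matrix (d is large for real page images).
        proj = [sum(x[j] * v[j] for j in range(d)) for x in X]
        w = [sum(proj[i] * X[i][j] for i in range(n)) for j in range(d)]
        norm = math.sqrt(sum(c * c for c in w)) or 1.0
        v = [c / norm for c in w]
    return v
```

Projecting each issue onto the leading eigenpages gives a low-dimensional trajectory over time; abrupt jumps in that trajectory are candidate layout changes.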
Machine Vision Algorithms on Cadaster Plans
2017. Premiere Annual Conference of the International Alliance of Digital Humanities Organizations (DH 2017), Montreal, Canada, August 8-11, 2017.
Cadaster plans are cornerstones for reconstructing dense representations of the history of a city. They provide information about the city's urban shape, making it possible to reconstruct the footprints of the most important urban components as well as information about the urban population and city functions. However, as some of these handwritten documents are more than 200 years old, establishing a processing pipeline for interpreting them remains extremely challenging. We present the first implementation of a fully automated process capable of segmenting and interpreting Napoleonic Cadaster Maps of the Veneto Region dating from the beginning of the 19th century. Our system extracts the geometry of each of the drawn parcels, and classifies, reads and interprets the handwritten labels.
Analyse multi-échelle de n-grammes sur 200 années d'archives de presse
Lausanne, EPFL, 2017.
DOI : 10.5075/epfl-thesis-8180.
The recent availability of large corpora of digitized texts spanning several centuries opens the way to new forms of studies on the evolution of languages. In this thesis, we study a corpus of 4 million press articles covering a period of 200 years. The thesis tries to measure the evolution of written French over this period at the level of words and expressions, but also in a more global way, by attempting to define integrated measures of linguistic evolution. The methodological choice is to introduce a minimum of linguistic hypotheses by developing new measures around the simple notion of the n-gram, a sequence of n consecutive words. On this basis, the thesis explores the potential of already known concepts, such as temporal frequency profiles and their diachronic correlations, but also introduces new abstractions, such as the notion of a resilient linguistic kernel or the decomposition of profiles into solidified expressions according to simple statistical models. Through the use of distributed computational techniques, it develops methods to test the relevance of these concepts on a large amount of textual data, and thus makes it possible to propose a virtual observatory of the diachronic evolutions associated with a given corpus. On this basis, the thesis explores more precisely the multi-scale dimension of linguistic phenomena by considering how standardized measures evolve when applied to increasingly long n-grams. The discrete and continuous scale from isolated entities (n=1) to increasingly complex and structured expressions (1 < n < 10) offers an axis of study transversal to the classical differentiations that ordinarily structure linguistics: syntax, semantics, pragmatics, and so on.
The thesis explores the quantitative and qualitative diversity of phenomena at these different scales of language and develops a novel approach by proposing multi-scale measurements and formalizations, with the aim of characterizing more fundamental structural aspects of the studied phenomena.
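The basic objects of the study, n-grams and their temporal frequency profiles, can be sketched as follows (a minimal illustration, not the thesis's full pipeline):

```python
def ngrams(tokens, n):
    """All sequences of n consecutive words in a token list."""
    return list(zip(*(tokens[i:] for i in range(n))))

def frequency_profile(corpus_by_year, ngram):
    """Yearly relative frequency of one n-gram: its temporal profile."""
    n = len(ngram)
    profile = {}
    for year, tokens in corpus_by_year.items():
        grams = ngrams(tokens, n)
        profile[year] = grams.count(ngram) / max(len(grams), 1)
    return profile
```

Varying n in such measures, from isolated words to ten-word expressions, is what gives the multi-scale axis of study described above.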
Urban Development Processes: Methodological Investigation into the Complexity and Dynamics of Post-socialist Cities
Lausanne, EPFL, 2017.
DOI : 10.5075/epfl-thesis-7597.
The overall objective of this thesis is to critically address, break down and reassemble the urban development process in post-socialist cities through a set of analyses that covers urban planning strategies, real-estate interventions, participatory and urban design activities. The blurred and distorted morphology of cities worldwide requires dynamic solutions and calls for proper techniques that are spatially and temporally adjusted to local socio-spatial patterns. The theoretical foundations are built upon the ordinary cities approach. It provides a unique assemblage to examine how cities are intertwined with the world, without forgetting the social, cultural and historical legacies inherent to each city. The urban development of post-socialist cities is thus perceived as a complex and dynamic system that incorporates discrepant layers of urban decision-making, tracks the level of urbanity through the fluctuating links between urban agency and socio-spatial patterns, and, in general, reveals the contextual processes of maintenance, transformation and change within an urban system. The case study of the Savamala neighbourhood is not only a scaled example of the pre-socialist material legacy, a socialist cultural and regulatory matrix, and a transitional social reality, but equally the condensed illustration of the multi-faceted circumstances of post-socialist urban development. These elements constitute the exploratory boundaries for research pertaining to a decision-making chain, urban agency networks, and socio-spatial patterns identified in Savamala through a qualitative data collection. The methodological framework is based on a process-driven, correlational research design that blends two methods. The Actor-Network Theory creates bottom-up logical argumentation to describe urban complexity. The Multi-Agent System serves to track urban dynamics and thus outline the action framework of the research.
Data analysis, triangulation and reduction rely on complex actor roles and synthesized networks, the contextualization of interests and interventions, and the distribution of urban system transitions. The combination of overlapping methods and the visualization of data in diagrams deconstructs long-term historical processes. The research findings are threefold. Firstly, they shed light on the actors and the processes at play in Savamala, unveiling an unbalanced conglomerate of sectors: political (power-mongers), economic (profit-seekers), professional (technicians and apparatchiks), civil and cultural (on the go). The articulation of urban agency in Savamala fundamentally confirms an authoritarian distribution of roles and decisions. The empirical results of the study contribute to the operationalization of several theoretical concepts for practical investigations in ordinary cities around the world. While urban development is interpreted in terms of contextualized urban system transitions, an overarching demarcation of the level of urbanity captures the fluctuations of local socio-spatial capital. The complexity of urban actors, forces and artefacts, and the dynamics of urban networks, interrelations and processes, are depicted as a legible, data-loaded scheme of nodes and links when data is visualised through the MAS-ANT methodological approach. This research furnishes a response to the necessity of altering a deterministic concept of urban research in terms of finding an intermediary between empirical data and their graphical display.
A Simple Set of Rules for Characters and Place Recognition in French Novels
Frontiers in Digital Humanities. 2017.
DOI : 10.3389/fdigh.2017.00006.
Big Data of the Past
Frontiers in Digital Humanities. 2017.
DOI : 10.3389/fdigh.2017.00012.
Big Data is not a new phenomenon. History is punctuated by regimes of data acceleration, characterized by feelings of information overload accompanied by periods of social transformation and the invention of new technologies. During these moments, private organizations, administrative powers, and sometimes isolated individuals have produced important datasets, organized following a logic that is often subsequently superseded but was at the time, nevertheless, coherent. To be translated into relevant sources of information about our past, these document series need to be redocumented using contemporary paradigms. The intellectual, methodological, and technological challenges linked to this translation process are the central subject of this article.
Narrative Recomposition in the Context of Digital Reading
Lausanne, EPFL, 2017.
DOI : 10.5075/epfl-thesis-7592.
In any creative process, the tools one uses have an immediate influence on the shape of the final artwork. However, while the digital revolution has redefined core values in most creative domains over the last few decades, its impact on literature remains limited. This thesis explores the relevance of digital tools for several aspects of novel writing by focusing on two research questions: Is it possible for an author to edit better novels out of already published ones, given access to adapted tools? And will authors change their way of writing when they know how they are being read? This thesis is a multidisciplinary participatory study, actively involving the Swiss novelist Daniel de Roulet, to construct measures, visualizations, and digital tools aimed at leveraging the process of dynamically reordering narrative material, similar to how one edits video footage. We developed and tested various text analysis and visualization tools, the results of which were interpreted and used by the author to recompose a family saga out of material he has been writing for twenty-four years. Based on this research, we released Saga+, an online editing, publishing, and reading tool. The platform was handed out to third parties to improve existing writings, making new novels available to the public as a result. While many researchers have studied the structuration of texts either through global statistical features or micro-syntactic analyses, we demonstrate that by allowing visualization and interaction at an intermediary level of organisation, authors can manipulate their own texts in agile ways. By integrating readers' traces into this newly revealed structure, authors can start to approach the question of optimizing their writing processes in ways that are similar to what is being practiced in other media industries. The introduction of tools for optimal composition opens new avenues for authors, as well as a controversial debate regarding the future of literature.
Optimized scripting in Massive Open Online Courses
Dariah Teach, Université de Lausanne, Switzerland, March 23-24, 2017.
The Time Machine MOOC, currently under preparation, is designed to provide the knowledge students need to use the editing tool of the Time Machine platform. The first test case of the platform is centered on our current work on the City of Venice and its archives. Small teaching modules focus on specific skills of increasing difficulty: segmenting a word on a page, transcribing a word from a document series, georeferencing ancient maps using homologous points, disambiguating named entities, redrawing urban structures, finding matching details between paintings, and writing scripts that perform some of these tasks automatically. Other skills include actions in the physical world, like scanning pages, books, and maps, or performing a photogrammetric reconstruction of a sculpture by taking a large number of pictures. Finally, some modules are dedicated to the general historical, linguistic, technical, or archival knowledge that constitutes a prerequisite for mastering specific tasks. A general dependency graph has been designed, specifying the order in which the skills can be acquired. The performance of most tasks can be tested using predefined exercises and evaluation metrics, which allows for a precise evaluation of each student's level of mastery. When students successfully pass the test related to a skill, they get the credentials to use that specific tool in the platform and start contributing. However, the teaching options can vary greatly for each skill. Building upon the script concept developed by Dillenbourg and colleagues, we designed each tutorial as a parameterized sequence. A simple gradient descent method progressively optimizes the parameters to maximize the students' success rate at the skill tests, thereby seeking a form of optimality among the various design choices for the teaching methods. Thus, the more students use the platform, the more efficient the teaching scripts become.
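The optimization loop described above can be sketched as follows. This is a minimal illustration, not the project's code: the two script parameters ("pacing" and "difficulty") and the synthetic success-rate function are assumptions, and since an observed pass rate has no analytic gradient, the sketch estimates it by finite differences over simulated student cohorts.

```python
import random

def success_rate(params, n_students=2000):
    # Stand-in for the measured pass rate at a skill test: a synthetic
    # concave function of two script parameters, sampled over a finite
    # cohort of simulated students (hence noisy, like real data).
    pacing, difficulty = params
    p = max(0.0, min(1.0, 0.9 - (pacing - 0.4) ** 2 - (difficulty - 0.6) ** 2))
    return sum(random.random() < p for _ in range(n_students)) / n_students

def optimize_script(params, steps=100, lr=0.2, eps=0.1):
    # Finite-difference gradient ascent on the observed success rate.
    params = list(params)
    for _ in range(steps):
        grad = []
        for i in range(len(params)):
            up, down = list(params), list(params)
            up[i] += eps
            down[i] -= eps
            grad.append((success_rate(up) - success_rate(down)) / (2 * eps))
        params = [p + lr * g for p, g in zip(params, grad)]
    return params

random.seed(0)
tuned = optimize_script([0.1, 0.9])  # drifts toward the best-performing script
```

In this toy setting the tuned parameters approach (0.4, 0.6), where the simulated pass rate peaks; on the platform, each evaluation would instead come from real cohorts of students taking the skill tests.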
The references of references: a method to enrich humanities library catalogs with citation data
International Journal on Digital Libraries. 2017.
DOI : 10.1007/s00799-017-0210-1.
The advent of large-scale citation indexes has greatly impacted the retrieval of scientific information in several domains of research. The humanities have largely remained outside of this shift, despite their increasing reliance on digital means for information seeking. Given that publications in the humanities have a longer than average life-span, mainly due to the importance of monographs for the field, this article proposes to use domain-specific reference monographs to bootstrap the enrichment of library catalogs with citation data. Reference monographs are works considered to be of particular importance in a research library setting, and likely to possess characteristic citation patterns. The article shows how to select a corpus of reference monographs, and proposes a pipeline to extract the network of publications they refer to. Results using a set of reference monographs in the domain of the history of Venice show that only 7% of extracted citations are made to publications already within the initial seed. Furthermore, the resulting citation network suggests the presence of a core set of works in the domain, cited more frequently than average.
Studying Linguistic Changes over 200 Years of Newspapers through Resilient Words Analysis
Frontiers in Digital Humanities. 2017.
DOI : 10.3389/fdigh.2017.00002.
This paper presents a methodology for analyzing linguistic changes in a given textual corpus that overcomes two common problems in corpus linguistics studies: the monotonic increase of corpus size over time, and the presence of noise in the textual data. In addition, our method better targets the linguistic evolution of the corpus, rather than other aspects such as noise fluctuation or topic evolution. A corpus formed by two newspapers, “La Gazette de Lausanne” and “Le Journal de Genève”, is used, providing 4 million articles from 200 years of archives. We first perform some classical measurements on this corpus in order to provide indicators and visualizations of linguistic evolution. We then define the concepts of lexical kernel and word resilience to face the two challenges of noise and corpus size fluctuation. The paper ends with a discussion comparing the results of the linguistic change analysis and concludes with possible directions for future work.
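On toy data, a lexical kernel and a simple resilience measure can be sketched as follows. These are illustrative definitions under our own simplifying assumptions; the paper's exact formulations may differ.

```python
# Toy corpus: one token list per yearly slice (note the slices differ in size,
# the very problem the kernel is designed to be insensitive to).
corpus_by_year = {
    1900: "la ville de genève est la plus grande ville".split(),
    1950: "la ville moderne grandit".split(),
    2000: "la grande ville et son journal".split(),
}

def lexical_kernel(corpus_by_year):
    # Word types attested in every yearly slice: by construction this set
    # does not grow just because the corpus does.
    vocabularies = [set(tokens) for tokens in corpus_by_year.values()]
    return set.intersection(*vocabularies)

def resilience(word, corpus_by_year):
    # A crude resilience proxy: the fraction of yearly slices in which
    # the word appears at all.
    slices = list(corpus_by_year.values())
    return sum(word in set(tokens) for tokens in slices) / len(slices)

kernel = lexical_kernel(corpus_by_year)  # {'la', 'ville'} on this toy corpus
```

Words in the kernel have resilience 1.0 by definition; words that drift in or out of usage score lower, which is the kind of signal the paper uses to separate linguistic change from noise.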
From Documents to Structured Data: First Milestones of the Garzoni Project
Led by an interdisciplinary consortium, the Garzoni project undertakes the study of apprenticeship, work and society in early modern Venice by focusing on a specific archival source, namely the Accordi dei Garzoni from the Venetian State Archives. The project revolves around two main phases: first, the design and development of tools to extract and render the information contained in the documents (according to Semantic Web standards), and second, the examination of that information. This paper outlines the main progress and achievements during the first year of the project.
Ancient administrative handwritten documents: virtual x-ray reading
A method for detecting ink writings in a specimen comprising stacked pages, allowing a page-by-page reading without turning pages. The method comprises the steps of taking a set of projection x-ray images for different positions of the specimen with respect to an x-ray source and a detector from an apparatus for taking projection x-ray images; storing the set of projection x-ray images in a suitable computer system; and processing the set of projection x-ray images to tomographically reconstruct the shape of the specimen.
Rendre le passé présent
Forum des 100, Université de Lausanne, Switzerland, May 2016.
The conception of a four-dimensional space, whose agile navigation makes it possible to reintroduce a fluid continuity between the present and the past, belongs to the old philosophical-technological dream of the time machine. The historical moment to which we are invited is the continuation of a long process in which fiction, technology, science and culture intertwine. The time machine is that ever-debated horizon, progressively approached and, today, perhaps attainable for the first time.
La modélisation du temps dans les Digital Humanities
Regimes temporels et sciences historiques, Bern, October 14, 2016.
Digital interfaces are optimized every day to offer frictionless navigation through the multiple dimensions of the present. It is this fluidity, characteristic of this new relationship to the documentary record, that the Digital Humanities could succeed in reintroducing into the exploration of the past. A simple button should let us slide from a representation of the present to the representation of the same referent 10, 100 or 1000 years ago. Ideally, interfaces for navigating through time should offer the same agility of action as those that let us zoom in and out on objects as large and dense as the terrestrial globe. Textual search, the new gateway to knowledge in the 21st century, should extend with the same simplicity to the contents of the documents of the past. Visual search, the second great moment in the indexing of the world, whose first results are beginning to enter our everyday digital practices, could be the keystone of access to the billions of documents that we must now make accessible in digital form. To make the past present, it would have to be restructured according to the logic of the structures of digital society. What would time become in this transformation? Simply a new dimension of space? The answer is perhaps more subtle.
L’Europe doit construire la première Time Machine
The Time Machine project, competing in the race for the new FET Flagships, proposes a unique archiving and computing infrastructure to structure, analyze and model data from the past, realign it with the present and make it possible to project into the future. It is supported by 70 institutions from 20 countries and by 13 international programs.
Visual Link Retrieval in a Database of Paintings
2016. VISART Workshop, ECCV, Amsterdam, September 2016.
DOI : 10.1007/978-3-319-46604-0_52.
This paper examines how far state-of-the-art machine vision algorithms can be used to retrieve common visual patterns shared by series of paintings. The research of such visual patterns, central to Art History Research, is challenging because of the diversity of similarity criteria that could relevantly demonstrate genealogical links. We design a methodology and a tool to annotate efficiently clusters of similar paintings and test various algorithms in a retrieval task. We show that pretrained convolutional neural network can perform better for this task than other machine vision methods aimed at photograph analysis. We also show that retrieval performance can be significantly improved by fine-tuning a network specifically for this task.
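Once each painting has a descriptor vector, retrieval reduces to nearest-neighbour ranking. The sketch below assumes hypothetical precomputed CNN descriptors (in the paper these come from a pretrained, then fine-tuned, convolutional network; here they are toy 3-dimensional vectors) and ranks candidate paintings by cosine similarity to the query.

```python
import math

# Hypothetical precomputed descriptors, one vector per painting.
features = {
    "Portrait A": [0.9, 0.1, 0.0],
    "Portrait B": [0.8, 0.2, 0.1],
    "Landscape C": [0.0, 0.1, 0.9],
}

def cosine(u, v):
    # Cosine similarity between two descriptor vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def visual_links(query, database, k=2):
    # Rank all other paintings by descriptor similarity to the query.
    others = [(name, cosine(database[query], vec))
              for name, vec in database.items() if name != query]
    others.sort(key=lambda pair: pair[1], reverse=True)
    return [name for name, _ in others[:k]]
```

At real scale, the brute-force loop would be replaced by an approximate nearest-neighbour index, but the ranking principle is the same.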
Diachronic Evaluation of NER Systems on Old Newspapers
2016. 13th Conference on Natural Language Processing (KONVENS 2016), Bochum, Germany, September 19-21, 2016. p. 97-107.
In recent years, many cultural institutions have engaged in large-scale newspaper digitization projects and large amounts of historical texts are being acquired (via transcription or OCRization). Beyond document preservation, the next step consists in providing enhanced access to the content of these digital resources. In this regard, the processing of units which act as referential anchors, namely named entities (NE), is of particular importance. Yet, the application of standard NE tools to historical texts faces several challenges, and performance is often not as good as on contemporary documents. This paper investigates the performance of different NE recognition tools applied to old newspapers by conducting a diachronic evaluation over 7 time series taken from the archives of the Swiss newspaper Le Temps.
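A diachronic evaluation of this kind can be sketched as one strict span-level F1 score per time slice, so that performance drift on older text becomes visible. This is a minimal illustration with invented toy entities, not the paper's evaluation code.

```python
def f1(gold, pred):
    # Strict F1 between gold and predicted entity sets for one slice.
    if not gold or not pred:
        return 0.0
    tp = len(gold & pred)
    precision, recall = tp / len(pred), tp / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def diachronic_f1(gold_by_slice, pred_by_slice):
    # One score per time slice, keyed and sorted by year.
    return {year: f1(gold_by_slice[year], pred_by_slice[year])
            for year in sorted(gold_by_slice)}

# Toy slices: a hypothetical NER system does worse on the older text.
gold = {1900: {"Genève", "Le Temps"}, 2000: {"Genève", "EPFL"}}
pred = {1900: {"Genève"}, 2000: {"Genève", "EPFL"}}
scores = diachronic_f1(gold, pred)
```

Plotting such per-slice scores over the seven time series is what reveals whether a tool trained on contemporary text degrades gracefully or sharply on 19th-century newspapers.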
Lausanne: EPFL PRESS.
Wikipedia has become the principal gateway to knowledge on the web. The doubts about information quality and the rigor of its collective negotiation process that marked its first couple of years have proved unfounded. Whether this delights or horrifies us, Wikipedia has become part of our lives. Flexible in both its form and its content, the online encyclopedia will continue to constitute one of the pillars of digital culture for decades to come. It is time to go beyond prejudices and to study its true nature and better understand the emergence of this “miracle.”
Le miracle Wikipédia
Lausanne: Presses Polytechniques et Universitaires Romandes.
Wikipédia has established itself as the main gateway to knowledge on the web. The debates of its early years concerning the quality of the information produced or the soundness of its collective negotiation process are now behind us. Whether we rejoice in it or deplore it, Wikipédia is now part of our lives. Flexible in both its form and its content, the online encyclopedia will no doubt continue to constitute one of the pillars of digital culture over the coming decades. Beyond prejudices, it is now a matter of studying its true nature and understanding, in retrospect, how such a “miracle” could have occurred.
La culture internet des mèmes
Lausanne: Presses Polytechniques et Universitaires Romandes.
We are at a moment of transition in the history of media. On the Internet, millions of people produce, alter and relay “memes”, digital content with stereotyped motifs. This “culture” offers a new, rich and complex landscape to study. For the first time, a phenomenon at once global and local, popular and, in a certain way, elitist, constructed, mediated and structured by technology, can be observed with precision. Studying memes means not only understanding what digital culture is and may become, but also inventing a new approach capable of grasping the complexity of the circulation of motifs on a worldwide scale.
Visual Patterns Discovery in Large Databases of Paintings
2016. Digital Humanities 2016, Kraków, Poland, July 11-16, 2016.
The digitization of large databases of photographs of works of art opens new avenues for research in art history. For instance, collecting and analyzing painting representations beyond the relatively small number of commonly accessible works was previously extremely challenging. In the coming years, researchers are likely to have easier access to representations of paintings not only from museum archives but also from private collections, fine arts auction houses, and art historians. However, access to large online databases is in itself not sufficient. There is a need for efficient search engines, capable of searching painting representations not only on the basis of textual metadata but also directly through visual queries. In this paper we explore how convolutional neural network descriptors can be used in combination with algebraic queries to express powerful search queries in the context of art history research.
Visualizing Complex Organizations with Data
IC Research Day, Lausanne, Switzerland, June 30, 2016.
The Affinity Map is a project funded by ENAC whose aim is to provide an instrument for understanding organizations. The photograph shows the unveiling of the first map at the ENAC Research Day. The visualization was presented to the scholars who are themselves displayed in the representation.
Navigating through 200 years of historical newspapers
2016. iPRES 2016, Bern, October 3-6, 2016.
This paper aims to describe and explain the processes behind the creation of a digital library composed of two Swiss newspapers, namely Gazette de Lausanne (1798-1998) and Journal de Genève (1826-1998), covering an almost two-century period. We developed a general purpose application giving access to this cultural heritage asset; a large variety of users (e.g. historians, journalists, linguists and the general public) can search through the content of around 4 million articles via an innovative interface. Moreover, users are offered different strategies to navigate through the collection: lexical and temporal lookup, n-gram viewer and named entities.
Lausanne, EPFL, 2016.
DOI : 10.5075/epfl-thesis-6954.
The thesis proposes a projective theory for contemporary urbanism that equates the active processes of the city and a new orientation for procedural urban design within a single line of thought that delineates the concept of a generic organization: one that derives from unitary operations acting in series, forming heterogeneous assemblages, but resisting the tendency to totalizing systematization. Such a description characterizes the urban condition as an open phenomenon, such as that described by theorists of assemblage urbanism. This thesis extends this work from an analytical or empirical theory to an operative one that can be applied to design practice by combining it with an ontology of encapsulation and object-based agency. I will argue that computational processes are unique in the way that they enable urban design to operate in the same manner as the city with regard to enaction and representation while drawing attention to the rhetorical dimension of interactive systems like urbanism and procedural models. These parallels are further explored through a series of themes that bridge between urban studies and urban design and that connect to historical concerns in computation and urban design. The four themes (Interactive, Generative, Reflexive, Entropic) coalesce around a coherent, integrated theoretical schema. Within this framework, I also argue for an increasingly involved role for architecture as an active agent in the urban design process. Illustrations of how this might occur (as functional code and software screenshots) are presented alongside the arguments to underscore the fact that the material basis of the computational model is as significant a determinant of design practice as the material realities of the city are to the urban experience. Finally, these lessons are imported back into urbanism with architecture serving as an interface to the city.
Procedural engagement allows architecture to participate in urbanism through an inextricably and mutually contingent interaction. In cases where this occurs, the city continuously prompts architecture to carry out new inquiries into the changing potential of the urban situation, without providing a terminal condition. Rather, by allowing the outcome of the situation to remain undecided, both urbanism and computational modeling can be seen to offer the same productive irreality and alterity: an urban generic.
Studying Linguistic Changes on 200 Years of Newspapers
2016. Digital Humanities 2016, Kraków, Poland, July 11-16, 2016.
Large databases of scanned newspapers open new avenues for studying linguistic evolution. By studying a two-billion-word corpus corresponding to 200 years of newspapers, we compare several methods in order to assess how fast language is changing. After critically evaluating an initial set of methods for assessing textual distance between subsets corresponding to consecutive years, we introduce the notion of a lexical kernel, the set of unique words that maintain themselves over long periods of time. Focusing on linguistic stability instead of linguistic change allows building more robust measures to assess long term phenomena such as word resilience. By systematically comparing the results obtained on two subsets of the corpus corresponding to two independent newspapers, we argue that the results obtained are independent of the specificity of the chosen corpus, and are likely to be the results of more general linguistic phenomena.
The References of References: Enriching Library Catalogs via Domain-Specific Reference Mining
2016. 3rd International Workshop on Bibliometric-enhanced Information Retrieval (BIR2016), Padua, Italy, March 20-23, 2016. p. 32-43.
The advent of large-scale citation services has greatly impacted the retrieval of scientific information for several domains of research. The Humanities have largely remained outside of this shift despite their increasing reliance on digital means for information seeking. Given that publications in the Humanities probably have a longer than average life-span, mainly due to the importance of monographs in the field, we propose to use domain-specific reference monographs to bootstrap the enrichment of library catalogs with citation data. We exemplify our approach using a corpus of reference monographs on the history of Venice and extracting the network of publications they refer to. Preliminary results show that on average only 7% of extracted references are made to publications already within such corpus, therefore suggesting that reference monographs are effective hubs for the retrieval of further resources within the domain.
S'affranchir des automatismes
Fabuleuses mutations, Cité des Sciences, December 8, 2015.
The Venice Time Machine
2015. ACM Symposium on Document Engineering, Lausanne, Switzerland, September 8-11, 2015.
The Venice Time Machine is an international scientific programme launched by EPFL and the University Ca’Foscari of Venice with the generous support of the Fondation Lombard Odier. It aims at building a multidimensional model of Venice and its evolution covering a period of more than 1000 years. The project aims to reconstruct a large open-access database that could be used for research and education. Thanks to a partnership with the Archivio di Stato in Venice, kilometers of archives are currently being digitized, transcribed and indexed, laying the foundation of the largest database ever created on Venetian documents. The State Archives of Venice contain a massive amount of handwritten documentation in languages evolving from medieval times to the 20th century. An estimated 80 km of shelves are filled with over a thousand years of administrative documents, from birth registrations, death certificates and tax statements, all the way to maps and urban planning designs. These documents are often very delicate and are occasionally in a fragile state of conservation. In complement to these primary sources, the contents of thousands of monographs have been indexed and made searchable.
Venice Time Machine : Recreating the density of the past
2015. Digital Humanities 2015, Sydney, June 29 - July 3, 2015.
This article discusses the methodology used in the Venice Time Machine project (http://vtm.epfl.ch) to reconstruct a historical geographical information system covering the social and urban evolution of Venice over a period of 1,000 years. Given the time span considered, the project used a combination of sources and a specific approach to align heterogeneous historical evidence into a single geographic database. The project is based on a mass digitization project of one of the largest archives in Venice, the Archivio di Stato. One goal of the project is to build a kind of ‘Google map’ of the past, presenting a hypothetical reconstruction of Venice in 2D and 3D for any year starting from the origins of the city to present-day Venice.
On Mining Citations to Primary and Secondary Sources in Historiography
2015. Clic-IT 2015, Trento, Italy, December 3-4, 2015.
We present preliminary results from the Linked Books project, which aims at analysing citations from the historiography on Venice. A preliminary goal is to extract and parse citations from any location in the text, especially footnotes, both to primary and secondary sources. We detail a pipeline for these tasks based on a set of classifiers, and test it on the Archivio Veneto, a journal in the domain.
Latent Social Information in Group Interactions with a Shared Workspace
Lausanne, EPFL, 2015.
DOI : 10.5075/epfl-thesis-6616.
Shared artifacts, such as drawings and schemas on whiteboards, or sticky notes with ideas on walls, are often created and interacted with during meetings. These shared artifacts a) facilitate the expression of complex fleeting ideas, b) enable collaborators to establish a common ground and validate each other's understanding of the context, and c) extend the validity of shared information by making it permanent. By the end of a collaboration session, the shared content denotes the shared knowledge amongst collaborators, which emerged as a result of a recursive process of storage, transformation, and retrieval from an external memory such as a whiteboard. Although these interactions with the artifacts symbolize the important episodes in a group discussion, the information contained within them has not been much leveraged in collaboration research. Being well assimilated in the established work culture, collaborators do not heed the interactions with the shared artifacts, and therefore the nature of the social information contained within them is latent. However, from a research perspective this information is valuable and can offer insights into several facets of ongoing group dynamics and processes. This thesis in particular a) identifies and examines the characteristics of the latent social information, b) studies the relationship of this information with different aspects of collaboration, and c) explores the practical utility of this information in collaboration assessment. We start by designing a meeting technology, MeetHub, that enables collaborators to share and interact with artifacts in an unconstrained manner over a shared workspace and allows us to collect fine-grained interactional information. Then we present user studies, where we extract and comprehend the relevant social information from interactions with the artifacts, and analyze its relationship with collaborative processes.
Our findings demonstrate that latent social information is significantly correlated with the task outcome, division-of-labor, and the quality of mutual understanding between collaborators. Finally, we present a prediction system based on this social information, capable of alerting the group members about the poorly grounded episodes in real-time, and thus enabling them to regulate their collaborative behavior. The final contribution of this work presents itself as implications towards the dual nature of shared workspace, supporting the creation and sharing of artifacts as well as an assessment tool.
Text Line Detection and Transcription Alignment: A Case Study on the Statuti del Doge Tiepolo
2015. Digital Humanities, Sydney, Australia, June 29 - July 3, 2015.
In this paper, we propose a fully automatic system for the transcription alignment of historical documents. We introduce the ‘Statuti del Doge Tiepolo’ data that include images as well as transcription from the 14th century written in Gothic script. Our transcription alignment system is based on forced alignment technique and character Hidden Markov Models and is able to efficiently align complete document pages.
Anatomy of a Drop-Off Reading Curve
2015. DH2015, Sydney, Australia, June 29 - July 3, 2015.
Not all readers finish the book they start to read. Electronic media allow us to measure more precisely how this “drop-off” effect unfolds as readers are reading a book. A curve showing how many people have read each chapter of a book is likely to go progressively down as some of them interrupt their reading “journey”. This article is an initial study of the shape of these “drop-off” reading curves.
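The curve described above can be computed directly from reading logs. A minimal sketch, assuming the logs have already been reduced to the last chapter each reader reached (the paper's actual data pipeline is not described here):

```python
def drop_off_curve(last_chapter_reached, n_chapters):
    # For each chapter, count how many readers got at least that far.
    # The resulting curve is non-increasing by construction.
    return [sum(last >= chapter for last in last_chapter_reached)
            for chapter in range(1, n_chapters + 1)]

# Five hypothetical readers of a five-chapter book: two stop early,
# two stop midway, one finishes.
curve = drop_off_curve([1, 3, 3, 5, 2], 5)
```

The interesting object of study is then the shape of this curve (steep early losses versus a slow steady decline), not any single count.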
Inversed N-gram viewer: Searching the space of word temporal profiles
2015. Digital Humanities 2015 , Sydney, Australia , 29 June–3 July 2015.
Quelques réflexions préliminaires sur la Venice Time Machine
L'archive dans quinze ans; Louvain-la-Neuve: Academia, 2015. p. 161--179.
Even today, most historians are used to working in very small teams, focusing on very specific questions. They only rarely exchange their notes or their data, perceiving, rightly or wrongly, that their preparatory research underlies the originality of their future work. Becoming aware of the size and informational density of an archive like that of Venice must make us realize that it is impossible for a few historians, working in an uncoordinated manner, to cover such a vast object with any systematicity. If we want to attempt to transform an archive of 80 kilometers covering a thousand years of history into a structured information system, we must develop a collaborative, coordinated and massive scientific program. We are facing an informational entity that is too large. Only an international scientific collaboration can attempt to come to grips with it.
A Map for Big Data Research in Digital Humanities
Frontiers in Digital Humanities. 2015.
DOI : 10.3389/fdigh.2015.00001.
This article is an attempt to represent Big Data research in digital humanities as a structured research field. A division in three concentric areas of study is presented. Challenges in the first circle – focusing on the processing and interpretations of large cultural datasets – can be organized linearly following the data processing pipeline. Challenges in the second circle – concerning digital culture at large – can be structured around the different relations linking massive datasets, large communities, collective discourses, global actors, and the software medium. Challenges in the third circle – dealing with the experience of big data – can be described within a continuous space of possible interfaces organized around three poles: immersion, abstraction, and language. By identifying research challenges in all these domains, the article illustrates how this initial cartography could be helpful to organize the exploration of the various dimensions of Big Data Digital Humanities research.
Mapping the Early Modern News Flow: An Enquiry by Robust Text Reuse Detection
2015. HistoInformatics 2014. p. 244-253.
DOI : 10.1007/978-3-319-15168-7_31.
Early modern printed gazettes relied on a system of news exchange and text reuse largely based on handwritten sources. The reconstruction of this information exchange system is possible by detecting reused texts. We present a method to identify text borrowings within noisy OCRed texts from printed gazettes based on string kernels and local text alignment. We apply our method to a corpus of Italian gazettes for the year 1648. Besides unveiling substantial overlaps in news sources, we are able to assess the editorial policy of different gazettes and account for a multi-faceted system of text reuse.
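The core intuition of robust reuse detection can be sketched with character n-gram overlap. This Jaccard measure is a crude stand-in for the string kernels and local alignment used in the paper, but it shares the key property of tolerating the scattered character errors typical of OCRed gazettes.

```python
def char_ngrams(text, n=5):
    # All character n-grams of the text, as a set.
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def reuse_score(a, b, n=5):
    # Jaccard overlap of character n-grams between two passages:
    # a single OCR error only perturbs the n grams that cross it,
    # so heavily reused text still scores high.
    ga, gb = char_ngrams(a, n), char_ngrams(b, n)
    return len(ga & gb) / len(ga | gb)

# Invented example passages: an original, a noisy OCR copy, unrelated news.
original = "the siege of candia continues without relief"
ocr_copy = "the siege of candia c0ntinues without relief"
unrelated = "prices of grain rose sharply in the market"
```

In the full pipeline, high-scoring passage pairs would then be passed to local text alignment to delimit the exact borrowed span.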
X-ray spectrometry and imaging for ancient administrative handwritten documents
X-Ray Spectrometry. 2015.
DOI : 10.1002/xrs.2581.
‘Venice Time Machine’ is an international program whose objective is transforming the ‘Archivio di Stato’ – 80 km of archival records documenting every aspect of 1000 years of Venetian history – into an open-access digital information bank. Our study is part of this project: We are exploring new, faster, and safer ways to digitize manuscripts, without opening them, using X-ray tomography. A fundamental issue is the chemistry of the inks used for administrative documents: Contrary to pieces of high artistic or historical value, for such items, the composition is scarcely documented. We used X-ray fluorescence to investigate the inks of four Italian ordinary handwritten documents from the 15th to the 17th century. The results were correlated to X-ray images acquired with different techniques. In most cases, iron detected in the ‘iron gall’ inks produces image absorption contrast suitable for tomography reconstruction, allowing computer extraction of handwriting information from sets of projections. When absorption is too low, differential phase contrast imaging can reveal the characters from the substrate morphology.
Ancient administrative handwritten documents: X-ray analysis and imaging
Journal of Synchrotron Radiation. 2015.
DOI : 10.1107/S1600577515000314.
Handwritten characters in administrative antique documents from three centuries have been detected using different synchrotron X-ray imaging techniques. Heavy elements in ancient inks, present even in everyday administrative manuscripts as shown by X-ray fluorescence spectra, produce attenuation contrast. In most cases the image quality is good enough for tomography reconstruction in view of future applications to virtual page-by-page ‘reading’. When attenuation is too low, differential phase contrast imaging can reveal the characters from refractive index effects. The results are potentially important for new information harvesting strategies, for example from the huge Archivio di Stato collection, the objective of the Venice Time Machine project.
Il pleut des chats et des chiens: Google et l'impérialisme linguistique
Le monde diplomatique. 2015.
At the beginning of last December, anyone who asked Google Translate for the Italian equivalent of the phrase « Cette fille est jolie » (“This girl is pretty”) received a strange proposal: Questa ragazza è abbastanza, literally “This girl is quite”. The beauty had been lost in translation. How can one of the best-performing machine translators in the world, backed by a unique linguistic capital of billions of sentences, make such a crude mistake? The answer is simple: it goes through English. “Jolie” can be translated as pretty, which means both “pretty” and “quite”. The second sense corresponds to the Italian abbastanza.
L'historien et l'algorithme
Le Temps des Humanités Digitales; FYP Editions, 2014. p. 49--63.
The stormy relationship between history and computer science is nothing new, and the revolution in the historical sciences announced for several decades is still awaited. In this chapter, we nevertheless attempt to show that an unprecedented evolution is now at work in the historical sciences, and that this transformation differs from the one that characterized, a few decades ago, the arrival of "cliometrics" and quantitative methods. Our hypothesis is that, through the effects of two complementary processes, we are witnessing the generalization of algorithms as mediating objects of historical knowledge.
X-ray Spectrometry and imaging for ancient handwritten document
2014. European Conference on X-Ray Spectrometry, EXRS2014 , Bologna .
We detected handwritten characters in ancient documents from several centuries with different synchrotron X-ray imaging techniques. The results were correlated with those of X-ray fluorescence analysis. In most cases, heavy elements produced image quality suitable for tomographic reconstruction, leading to virtual page-by-page "reading". When absorption is too low, differential phase contrast (DPC) imaging can reveal the characters from the substrate morphology. This paves the way to new strategies for information harvesting during mass digitization programs. This study is part of the Venice Time Machine project, an international research program aiming to transform the immense Venetian archival records into an open-access digital information system. The Archivio di Stato in Venice holds about 80 km of archival records documenting every aspect of 1000 years of Venetian history. A large part of these records take the form of ancient bound registers that can only be digitized through cautious manual operations: each page must be turned by hand in order to be photographed. Our project explores new ways to virtually "read" manuscripts without opening them. We specifically plan to use X-ray tomography to computer-extract page-by-page information from sets of projection images. The raw data can be obtained without opening or manipulating the manuscripts, reducing the risk of damage and speeding up the process. The present tests demonstrate that the approach is feasible. Furthermore, they show that, over a very long period of time, the common recipes used in Europe for inks in "normal" handwritten documents (ship records, notary papers, commercial transactions, demographic accounts, etc.) very often produced a high concentration of heavy or medium-heavy elements such as Fe, Hg and Ca. This opens the way in general to X-ray analysis and imaging, and could also lead to a better understanding of deterioration mechanisms in the search for remedies.
The most important of the results we will present is tomographic reconstruction. We simulated books with stacks of manuscript fragments and, from sets of projection images, obtained individual views that correspond to a virtual page-by-page "reading" without opening the volume.
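The tomographic step described above can be sketched in a few lines. The following is a hedged illustration, not the authors' code: a simulated "page" with a high-absorption ink stroke is projected at many angles and recovered by filtered back-projection with scikit-image. The phantom, angles and stroke position are invented for the example.

```python
# Illustrative sketch: recovering a high-absorption ink stroke from a set of
# X-ray projection images via filtered back-projection (scikit-image).
import numpy as np
from skimage.transform import radon, iradon

# A 64x64 "page": zero background with an iron-gall-like absorbing stroke.
page = np.zeros((64, 64))
page[28:36, 24:40] = 1.0

# Simulate projections at 60 angles, then reconstruct.
angles = np.linspace(0.0, 180.0, 60, endpoint=False)
sinogram = radon(page, theta=angles)
reconstruction = iradon(sinogram, theta=angles, filter_name="ramp")

# The stroke should dominate the reconstruction.
r, c = np.unravel_index(np.argmax(reconstruction), reconstruction.shape)
print(r, c)
```

In the real setting the sinogram would come from the beamline detector rather than a simulated phantom, and a full register would require stacking many such slices.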
Virtual X-ray Reading (VXR) of Ancient Administrative Handwritten Documents
2014. Synchrotron Radiation in Art and Archaeology, SR2A 14 .
The study of ancient documents is too often confined to specimens of high artistic value or to official writings. Yet a wealth of information is stored in administrative records such as ship records, notary papers, work contracts, tax declarations, commercial transactions or demographic accounts. One of the best examples is the Venice Time Machine project, which targets a massive digitization and information extraction program for the Venetian archives. The Archivio di Stato in Venice holds about 80 km of archival documents spanning ten centuries and documenting every aspect of the Venetian Mediterranean Empire. If unlocked and transformed into a digital information system, this information could significantly change our understanding of European history. We are exploring new ways to facilitate and speed up this broad task, exploiting X-ray techniques, notably those based on synchrotron light. Specifically, we plan to use X-ray tomography to computer-extract page-by-page information from sets of projection images. The raw data can be obtained without opening or manipulating the bound administrative registers, reducing the risk of damage and accelerating the process. We present here positive tests of this approach. First, we systematically analyzed the ink composition of a sample of Italian handwritten documents spanning several centuries. Then, we performed X-ray imaging with different contrast mechanisms (absorption, scattering and refraction) using the differential phase contrast (DPC) mode of the TOMCAT beamline of the Swiss Light Source (SLS). Finally, we selected cases of high contrast to perform tomographic reconstruction and demonstrate page-by-page handwriting recognition. The experiments concerned both black inks from different centuries and red ink from the 15th century. For the majority of the specimens, we found in the ink areas heavy or medium-heavy elements such as Fe, Ca, Hg, Cu and Zn.
This resolves a major question about our approach, since documentation on the nature of inks for ancient administrative records is quite scarce. As a byproduct, the approach can produce valuable information on the ink-substrate interaction, with the objective of understanding and preventing corrosion and deterioration.
La simulation humaine : le roman-fleuve comme terrain d'expérimentation narrative
Cahiers de Narratologie. 2014.
In this article we present the approach and first results of a participatory research project conducted jointly by the EPFL Digital Humanities Laboratory (DHLAB) and the Swiss writer Daniel de Roulet. In this study, we explore the ways in which digital reading may influence how complex narratives, of the roman-fleuve or saga type, are written and reorganized. We also present our first conclusions, as well as possible future work, in this vast and so far little-studied domain.
Character Networks and Centrality
University of Lausanne, 2014.
A character network represents relations between characters from a text; the relations are based on text proximity, shared scenes/events, quoted speech, etc. Our project sketches a theoretical framework for character network analysis, bringing together narratology, both close and distant reading approaches, and social network analysis. It is in line with recent attempts to automatise the extraction of literary social networks (Elson, 2012; Sack, 2013) and other studies stressing the importance of character-systems (Woloch, 2003; Moretti, 2011). The method we use to build the network is direct and simple. First, we extract co-occurrences from a book index, without the need for text analysis. We then describe the narrative roles of the characters, which we deduce from their respective positions in the network, i.e. the discourse. As a case study, we use the autobiographical novel Les Confessions by Jean-Jacques Rousseau. We start by identifying co-occurrences of characters in the book index of our edition (Slatkine, 2012). Subsequently, we compute four types of centrality: degree, closeness, betweenness, eigenvector. We then use these measures to propose a typology of narrative roles for the characters. We show that the two parts of Les Confessions, written years apart, are structured around mirroring central figures that bear similar centrality scores. The first part revolves around the mentor of Rousseau, a figure of openness. The second part centres on a group of schemers, depicting a period of deep paranoia. We also highlight characters with intermediary roles: they provide narrative links between the societies in the life of the author. The method we detail in this complete case study of character network analysis can be applied to any work documented by an index.
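The four centrality measures named in the abstract can be computed directly with a standard network library. The sketch below uses NetworkX on a toy co-occurrence graph; the character names and edges are illustrative inventions, not data from the Rousseau index.

```python
# Hedged sketch: the four centrality measures on a toy co-occurrence network.
# An edge means two characters share an index entry (edges are invented).
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("Rousseau", "Warens"), ("Rousseau", "Diderot"),
    ("Rousseau", "Grimm"), ("Diderot", "Grimm"),
    ("Warens", "Anet"),
])

centralities = {
    "degree": nx.degree_centrality(G),
    "closeness": nx.closeness_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    "eigenvector": nx.eigenvector_centrality(G, max_iter=1000),
}

for name, scores in centralities.items():
    top = max(scores, key=scores.get)
    print(f"{name:12s} top character: {top}")
```

Comparing which characters rank highest under each measure is what supports the typology of narrative roles: a character can be a hub by degree yet unremarkable by betweenness, or vice versa.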
Encoding metaknowledge for historical databases
2014. Digital Humanities 2014 , Lausanne, Switzerland , July 7-12, 2014. p. 288-289.
Historical knowledge is fundamentally uncertain. A given account of an historical event is typically based on a series of sources and on sequences of interpretation and reasoning based on these sources. Generally, the product of this historical research takes the form of a synthesis, like a narrative or a map, but does not give a precise account of the intellectual process that led to this result. Our project consists of developing a methodology, based on semantic web technologies, to encode historical knowledge while documenting, in detail, the intellectual sequences linking the historical sources with a given encoding, also known as paradata. More generally, the aim of this methodology is to build systems capable of representing multiple historical realities and of documenting the underlying processes in the construction of possible knowledge spaces.
La question de la langue à l'époque de Google
Digital Studies: Organologie des savoirs et technologies de la connaissance; Limoges: FYP Editions, 2014.
In 2012, Google posted revenues of 50 billion dollars, an impressive financial result for a company created only some fifteen years earlier. 50 billion dollars is 140 million dollars per day, 5 million dollars per hour. If you read this chapter in about ten minutes, Google will, in the meantime, have earned almost a million dollars in revenue. What does Google sell to achieve such impressive financial performance? Google sells words, millions of words.
Fantasmagories au musée
The increasingly pervasive use of new technologies in museums and libraries (touch tablets, audio guides, interactive screens, etc.) is said to divide audiences between those seeking understanding and those for whom emotion comes first. How, then, can a shared collective experience be reconciled with technical devices? How can virtual labels floating in the air become "didactic phantasmagorias"? Feedback from a mixed-reality museographic experiment based on the use of "holographic" virtual display cases.
A Preparatory Analysis of Peer-Grading for a Digital Humanities MOOC
2014. Digital Humanities 2014 , Lausanne , 7-12 July. p. 227-229.
Over the last two years, Massive Open Online Courses (MOOCs) have been unexpectedly successful in convincing large numbers of students to pursue online courses in a variety of domains. Contrary to the "learn anytime anywhere" motto, this new generation of courses is based on regular assignments that must be completed and corrected on a fixed schedule. Successful courses attracted about 50 000 students in the first week but typically stabilized around 10 000 in the following weeks, as most courses demand significant involvement. With 10 000 students, grading is obviously an issue, and the first successful courses tended to be technical, typically in computer science, where various options for automatic grading systems could be envisioned. However, this posed a challenge for humanities courses. The solution that has been investigated for dealing with this issue is peer-grading: having students grade one another's work. The intuition that this would work was based on older results showing high correlation between professor grading, peer-grading and self-grading. The generality of this correlation can reasonably be questioned: there is a high chance that peer-grading works for certain domains, or for certain assignments, but not for others. Ideally, this should be tested experimentally before launching any large-scale course. EPFL is one of the first European schools to experiment with MOOCs in various domains. Since the launch of these first courses, preparing an introductory MOOC on Digital Humanities has been one of our top priorities. However, we felt it was important to first validate the kind of peer-grading strategy we were planning to implement on a smaller set of students, to determine whether it would actually work for the assignments we envisioned. This motivated the present study, conducted during the first semester of our master's-level introductory course on Digital Humanities at EPFL.
Linguistic Capitalism and Algorithmic Mediation
Google’s highly successful business model is based on selling words that appear in search queries. Organizing several million auctions per minute, the company has created the first global linguistic market and demonstrated that linguistic capitalism is a lucrative business domain, one in which billions of dollars can be realized per year. Google’s services need to be interpreted from this perspective. This article argues that linguistic capitalism implies not an economy of attention but an economy of expression. As several million users worldwide daily express themselves through one of Google’s interfaces, the texts they produce are systematically mediated by algorithms. In this new context, natural languages could progressively evolve to seamlessly integrate the linguistic biases of algorithms and the economical constraints of the global linguistic economy.
Analyse des réseaux de personnages dans Les Confessions de Jean-Jacques Rousseau
Les Cahiers du Numérique. 2014.
DOI : 10.3166/LCN.10.3.109‐133.
This article studies the concept of centrality in the networks of characters appearing in Les Confessions by Jean-Jacques Rousseau. Our objective is to characterize certain aspects of the characters' narrative roles on the basis of their co-occurrences in the text. We sketch a theoretical framework for literary network analysis, bringing together narratology, distant reading and social network analysis. We extract co-occurrences from a book index without the need for text analysis and describe the narrative roles of the characters. As a case study, we use the autobiographical novel Les Confessions by Jean-Jacques Rousseau. Finally, we compute four types of centrality (degree, closeness, betweenness, eigenvector) and use these measures to propose a typology of narrative roles for the characters.
A Network Analysis Approach of the Venetian Incanto System
2014. Digital Humanities 2014 , Lausanne , July 7-12, 2014.
The objective of this paper is to perform new analyses of the structure and evolution of the Incanto system. The aim is to go beyond textual narrative and even cartographic representation by means of network analysis, which could offer a new perspective for understanding this maritime system.
Character networks in Les Confessions from Jean-Jacques Rousseau
2014. Texas Digital Humanities Conference , Houston, Texas, USA , April 10-12, 2014.
Semi-Automatic Transcription Tool for Ancient Manuscripts
IC Research Day 2014: Challenges in Big Data, SwissTech Convention Center, Lausanne, Switzerland, June 12, 2014.
In this work, we investigate various techniques from the fields of shape analysis and image processing in order to construct a semi-automatic transcription tool for ancient manuscripts. First, we design a shape matching procedure using shape contexts and exploit this procedure to compute different distances between two arbitrary shapes/words. Then, we use Fisher discrimination to combine these distances into a single similarity measure and use it to represent the words naturally on a similarity graph. Finally, we investigate an unsupervised clustering analysis on this graph to create groups of semantically similar words, and propose an uncertainty measure associated with the attribution of one word to a group. The clusters, together with the uncertainty measure, form the core of the semi-automatic transcription tool, which we test on a dataset of 42 words. The average classification accuracy achieved with this technique on this dataset is 86%, which is quite satisfactory. The tool reduces the number of words that must actually be typed to transcribe a document by 70%.
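The distance-then-cluster pipeline above can be sketched generically. This is a hedged stand-in, not the authors' implementation: random feature vectors replace the shape-context distances, agglomerative clustering replaces the graph clustering, and the distance-to-centroid uncertainty proxy is an invented simplification of the paper's uncertainty measure.

```python
# Hedged sketch: clustering word descriptors via pairwise distances,
# with a simple per-word uncertainty proxy. Data are synthetic.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# 12 "word" descriptors drawn around 3 hidden prototypes (4 words each).
prototypes = rng.normal(size=(3, 16))
words = np.vstack([p + 0.05 * rng.normal(size=(4, 16)) for p in prototypes])

# Pairwise distances form the (dense) similarity graph.
dist = squareform(pdist(words, metric="euclidean"))

# Unsupervised clustering into groups of similar words.
labels = fcluster(linkage(pdist(words), method="average"),
                  t=3, criterion="maxclust")

# Uncertainty proxy: distance of each word to its own cluster centroid.
centroids = {c: words[labels == c].mean(axis=0) for c in set(labels)}
uncertainty = np.array([np.linalg.norm(w - centroids[c])
                        for w, c in zip(words, labels)])
print(labels, uncertainty.round(3))
```

In a semi-automatic workflow, a transcriber would only type one label per cluster and review the words whose uncertainty exceeds a threshold, which is where the 70% typing reduction comes from.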
Attentional Processes in Natural Reading: the Effect of Margin Annotations on Reading Behaviour and Comprehension
2014. ACM Symposium on Eye Tracking Research and Applications , Safety Harbor, USA , March 26-28, 2014.
We present an eye-tracking study investigating how natural reading behavior and reading comprehension are influenced by in-context annotations. In a lab experiment, three groups of participants were asked to read a text and answer comprehension questions: a control group that did not take annotations, a second group reading and taking annotations, and a third group reading a peer-annotated version of the same text. A custom head-mounted eye-tracking system was designed specifically for this experiment, in order to study how learners read and quickly re-read annotated paper texts under loosely constrained experimental conditions. In the analysis, we measured the phenomenon of annotation-induced overt attention shifts in reading and found that: (1) the reader's attention shifts toward a margin annotation more often when the annotation lies in the early peripheral vision, and (2) the number of attention shifts between two different types of information units is positively related to comprehension performance in quick re-reading. These results can be translated into potential criteria for knowledge assessment systems.
3D Model-Based Gaze Estimation in Natural Reading: a Systematic Error Correction Procedure based on Annotated Texts
Teaching & PhD
- Digital Humanities
- Doctoral program in computer and communication sciences
- Doctoral Program in Technology Management
- Doctoral Program in Architecture and Sciences of the City
- Doctoral Program Digital Humanities
This course gives an introduction to the fundamental concepts and methods of the Digital Humanities, both from a theoretical and applied point of view. The course introduces the Digital Humanities circle of processing and interpretation, from data acquisi...
The ambition of this course is to give a panorama of the development of historical cartography in Europe and to demonstrate the manner in which these documents can be used to build historical geographical information systems (HGIS).