Frédéric Kaplan
EPFL CDH DHI DHLAB
INN 141 (Bâtiment INN)
Station 14
1015 Lausanne
+41 21 693 02 53
+41 21 693 19 01
Office: INN 141
Website: https://dhlab.epfl.ch
EPFL IC-DO
INN 141 (Bâtiment INN)
Station 14
1015 Lausanne
Website: https://ic.epfl.ch/page8797.html
EPFL Pavilions
INN 141 (Bâtiment INN)
Station 14
1015 Lausanne
+41 21 693 02 53
Office: INN 141
Website: https://artlab.epfl.ch/
Short biography
Professor Frédéric Kaplan directs the College of Humanities (CDH) at the École polytechnique fédérale de Lausanne (EPFL). He also holds the chair of Digital Humanities and is president of the Time Machine Organisation, a non-profit entity bringing together more than 600 institutions. He is the author of some ten books, translated into several languages, and of more than a hundred scientific publications. His work has also led to exhibitions in several major museums and venues, including the Venice Architecture Biennale, the Grand Palais and the Centre Pompidou in Paris, and the Museum of Modern Art in New York.
Infoscience publications
2024
[147] Post-correction of Historical Text Transcripts with Large Language Models: An Exploratory Study
The quality of automatic transcription of heritage documents, whether from print, manuscript or audio sources, has a decisive impact on the ability to search and process historical texts. Although significant progress has been made in text recognition (OCR, HTR, ASR), textual materials derived from library and archive collections remain largely erroneous and noisy. Effective post-transcription correction methods are therefore necessary and have been intensively researched for many years. As large language models (LLMs) have recently shown exceptional performance in a variety of text-related tasks, we investigate their ability to amend poor historical transcriptions. We evaluate fourteen foundation language models against various post-correction benchmarks comprising different languages, time periods and document types, as well as different transcription qualities and origins. We compare the performance of different model sizes and different prompts of increasing complexity in zero- and few-shot settings. Our evaluation shows that LLMs are anything but efficient at this task. Quantitative and qualitative analyses of the results allow us to share valuable insights for future work on post-correcting historical texts with LLMs.
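As a rough illustration of the evaluation setup such a study relies on, the sketch below scores a post-correction output against a ground-truth transcript using character error rate (CER); the prompt wording and the simulated model output are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch: evaluating LLM post-correction of a noisy OCR line with CER.
# The prompt text and the "model output" are hypothetical stand-ins; the paper
# benchmarks fourteen foundation models across several languages and periods.

def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: Levenshtein distance / reference length."""
    m, n = len(reference), len(hypothesis)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,      # deletion
                        dp[j - 1] + 1,  # insertion
                        prev + (reference[i - 1] != hypothesis[j - 1]))  # substitution
            prev = cur
    return dp[n] / max(m, 1)

noisy = "Tbe qnick brovvn fox jumqed ovcr the lazy d0g."
truth = "The quick brown fox jumped over the lazy dog."

prompt = ("Correct the OCR errors in the following sentence. "
          "Return only the corrected text.\n\n" + noisy)
# corrected = call_llm(prompt)  # placeholder for any chat-completion API
corrected = "The quick brown fox jumped over the lazy dog."  # assumed model output

print(f"CER before: {cer(truth, noisy):.3f}")
print(f"CER after:  {cer(truth, corrected):.3f}")
```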
2024-02-18. The 8th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature, St Julian's, Malta, March 22, 2024. p. 133-159.
2023
[146] Where Did the News Come From? Detection of News Agency Releases in Historical Newspapers
Since their beginnings in the 1830s and 1840s, news agencies have played an important role in the national and international news market, aiming to deliver news as quickly and as reliably as possible. While we know that newspapers have long used agency content to produce their stories, the extent to which the agencies shape our news often remains unclear. Although researchers have already addressed this question, recently by using computational methods to assess the influence of news agencies at present, large-scale studies on the role of news agencies in the past continue to be rare. This thesis aims to bridge this gap by detecting news agencies in a large corpus of Swiss and Luxembourgish newspaper articles (the impresso corpus) for the years 1840-2000 using deep learning methods. For this, we first build and annotate a multilingual dataset with news agency mentions, which we then use to train and evaluate several BERT-based agency detection and classification models. Based on these experiments, we choose two models (for French and German) for inference on the impresso corpus. Results show that ca. 10% of the articles explicitly reference news agencies, with the greatest share of agency content after 1940, although systematic citation of agencies already began slowly in the 1910s. Differences in the usage of agency content across time, countries and languages, as well as between newspapers, reveal a complex network of news flows, whose exploration provides many opportunities for future work.
2023-08-18. Advisor(s): M. Ehrmann; E. Boros; M. Duering; F. Kaplan.
[145] From Archival Sources to Structured Historical Information: Annotating and Exploring the "Accordi dei Garzoni"
While automatic document processing techniques have achieved a certain maturity for present-day documents, the transformation of handwritten documents into well-represented, structured and connected data that can satisfactorily be used for historical study purposes is not straightforward and still presents major challenges. Transitioning from documents to structured data was one of the key challenges faced by the Garzoni project, and this chapter details the techniques and the steps taken to represent, extract, enhance and exploit the information contained in the archival material.
Apprenticeship, Work, Society in Early Modern Venice; Abingdon: Routledge, Taylor & Francis Group, 2023-02-10. p. 304. DOI: 10.4324/9781003197195-6.
[144] Machine-Learning-Enhanced Procedural Modeling for 4D Historical Cities Reconstruction
The generation of 3D models depicting cities in the past holds great potential for documentation and educational purposes. However, it is often hindered by incomplete historical data and the specialized expertise required. To address these challenges, we propose a framework for historical city reconstruction. By integrating procedural modeling techniques and machine learning models within a Geographic Information System (GIS) framework, our pipeline allows for effective management of spatial data and the generation of detailed 3D models. We developed an open-source Python module that fills gaps in 2D GIS datasets and directly generates 3D models up to LOD 2.1 from GIS files. The use of the CityJSON format ensures interoperability and accommodates the specific needs of historical models. A practical case study using footprints of the Old City of Jerusalem between 1840 and 1940 demonstrates the creation, completion, and 3D representation of the dataset, highlighting the versatility and effectiveness of our approach. This research contributes to the accessibility and accuracy of historical city models, providing tools for the generation of informative 3D models. By incorporating machine learning models and maintaining the dynamic nature of the models, we ensure the possibility of supporting ongoing updates and refinement based on newly acquired data. Our procedural modeling methodology offers a streamlined and open-source solution for historical city reconstruction, eliminating the need for additional software and increasing the usability and practicality of the process.
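To give a concrete flavor of the kind of output such a pipeline targets, here is a hedged sketch that extrudes a 2D footprint into a CityJSON block model at roughly LOD 1; the footprint coordinates, height and object id are invented, and the project's actual open-source module is considerably richer.

```python
import json

# Minimal sketch: extrude a 2D footprint into a CityJSON block model (~LOD 1).
# Footprint, height and object id are invented illustrations, not project data.

footprint = [(0.0, 0.0), (10.0, 0.0), (10.0, 8.0), (0.0, 8.0)]  # counter-clockwise
height = 6.0
scale = 0.001  # CityJSON stores integer vertices rescaled by "transform"

base = [[round(x / scale), round(y / scale), 0] for x, y in footprint]
top = [[round(x / scale), round(y / scale), round(height / scale)] for x, y in footprint]
vertices = base + top
n = len(footprint)

floor = [list(reversed(range(n)))]   # one ring, facing downward
roof = [list(range(n, 2 * n))]       # one ring, facing upward
walls = [[[i, (i + 1) % n, (i + 1) % n + n, i + n]] for i in range(n)]

citymodel = {
    "type": "CityJSON",
    "version": "1.1",
    "transform": {"scale": [scale] * 3, "translate": [0.0, 0.0, 0.0]},
    "CityObjects": {
        "building-1": {
            "type": "Building",
            "geometry": [{"type": "Solid", "lod": "1",
                          "boundaries": [[floor, roof] + walls]}],  # one shell
        }
    },
    "vertices": vertices,
}

print(json.dumps(citymodel)[:200], "...")  # preview the generated model
```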
Remote Sensing
2023
DOI: 10.3390/rs15133352
[143] Ce que les machines ont vu et que nous ne savons pas encore
This article conceptualizes the idea that there exists a "dark matter" made up of the latent structures identified by the machinic gaze across large heritage photographic collections. The photographic campaigns of twentieth-century art history had the implicit ambition of turning every work of art into a document that could be studied more easily. Over time, the creation of these visual collections produced a corpus of information potentially denser and richer than its creators had initially imagined. Indeed, the digital conversion of these immense visual corpora now makes it possible to reanalyze these images with computer vision techniques, artificial intelligence thus opening the way to perspectives of study quite different from those conceivable in the last century. We could thus say that these images hold an immense latent potential of knowledge, a dense network of relations that has not yet been brought to light. What have machines seen, or what will they be able to see, in these image collections that humans have not yet identified? How far does human visual knowledge extend compared with what the machine has been able to analyze? The new techniques for indexing images and the motifs that compose them bring us closer to a Copernican revolution of the visual, in which humans can, thanks to the machine-prosthesis, analyze far more images than they could through simple mnemonic activity, and select specific perspectives by comparing sets of motifs with one another. This augmented vision is founded on a pre-analysis conducted by the machine over the whole of these visual corpora, a training that makes it possible to recover the underlying structure of the image system. Human vision is thus extended by the machine's prior artificial gaze. To understand what is at stake in this new alliance, we must study the nature of this artificial gaze, understand its potential for discovering structures unknown until now, and anticipate the new forms of human knowledge to which it may give rise. The challenge for the coming years will therefore be to understand what machines have seen that we do not yet know.
Sociétés & Représentations
2023
DOI: 10.3917/sr.055.0249
2022
[142] Automatic table detection and classification in large-scale newspaper archives
In recent decades, major efforts to digitize historical documents have led to the creation of large machine-readable corpora, including newspapers, which are waiting to be processed and analyzed. Newspapers are a valuable historical source, notably because of the plurality of subjects and points of view they cover; however, their heterogeneity, due to their diachronic properties and their visual richness, makes them difficult to deal with. Certain recurring elements, such as tables, which are powerful layout objects because of their ability to easily convey a large amount of information through their logical visual arrangement, contribute to the difficulty of processing them. This thesis focuses on automatic table processing in large-scale newspaper archives. Starting from a large corpus of Luxembourgish newspapers annotated with tables, we propose a statistical exploration of this dataset as well as strategies to address its annotation inconsistencies and to automatically bootstrap a training dataset for table classification. We also explore the ability of deep learning methods to detect and semantically classify tables. The performance of image segmentation models is compared in a series of experiments on their ability to learn under challenging conditions, while classifiers based on different combinations of data modalities are evaluated on the task of table classification. Results show that visual models are able to detect tables by learning on an inconsistent ground truth, and that adding further modalities increases classification performance.
2022-02-08. Advisor(s): M. Ehrmann; S. Clematide; F. Kaplan.
[141] Opacité et transparence dans le design d'un dispositif de surveillance urbain : le cas de l'IMSI catcher
This thesis assesses the surveillance operated on the mobile phone network by governmental actors (intelligence agencies, police, army) and the relationship between monitored spaces and their users. Indeed, some new surveillance devices used by intelligence services redefine the spatiality of surveillance, raising new questions in this field of research. More specifically, this research focuses on one specific object: the IMSI catcher, a monitoring apparatus of the cellular network that intercepts cellphones' identity and some communications in a given area by mimicking the activity of a cell tower. While this kind of device seems to offer a tactical and security interest in the fight against terrorism and crime, many civil liberties organisations such as the Electronic Frontier Foundation, Privacy International and La Commission nationale de l'informatique et des libertés are concerned about the potential for uncontrolled surveillance; indeed, the controversial nature of the device could endanger certain individual and public rights. What is this technical object, and which new issues come with its use in surveillance? How, and from which perspective, is it problematic? What does the IMSI catcher teach us about the potential future of surveillance regimes? I look into this specific device in a research framework at the intersection of design research practices, science and technology studies (STS) and surveillance studies. First, I deal with this surveillance apparatus as a technical object, from a perspective fed by the theoretical framework of concretization and technical lines proposed by Gilbert Simondon and Yves Deforge, through the analysis of a visual and technical documentation. Second, I use a research-by-design approach to explore certain assumptions regarding the nature of the object itself, its functioning and its "concrete" aspect (or rather "non-concrete" in the present case), with the help of approaches borrowed from reverse engineering and reconstitution, close to media archeology. Then, I explore possible opposition and protest trajectories with the help of prototypes designed with critical design and speculative design methods. Finally, through the writing of prospective scenarios, I build a design fiction that offers a synthesis, potentially subject to debate, around the IMSI catcher's uses, present and to come, and more broadly on the potential future of surveillance regimes.
Lausanne, EPFL, 2022. DOI: 10.5075/epfl-thesis-8838.
2021
[140] Aux portes du monde miroir
The Mirror World is no longer an imaginary device, a mirage in a distant future; it is a reality under construction. In Europe, Asia and on the American continent, large companies and the best universities are working to build its infrastructures, to define its functionalities, to specify its logistics. The Mirror World, in its asymptotic form, presents a quasi-continuous representation of the world in motion, integrating, virtually, all photographic perspectives. It is a new giant computational object, opening the way to new research methods and probably even to a new type of science. The economic and cultural stakes of this third platform are immense. If the Mirror World transforms access to knowledge for new generations, as the Web and social networks did in their time, it is our responsibility to understand and, if need be, bend its technological trajectory to make this new platform an environment for the critical knowledge of the past and the creative imagination of the future.
Revue Histoire de l’art : Humanités numériques
2021-06-29
[139] Une approche computationnelle du cadastre napoléonien de Venise
At the beginning of the 19th century, the Napoleonic administration introduced a new standardised description system to give an objective account of the form and functions of the city of Venice. The cadastre, deployed on a European scale, offered for the first time an articulated and precise view of the structure of the city and its activities, through a methodical approach and standardised categories. With the use of digital techniques, based in particular on deep learning, it is now possible to extract from these documents an accurate and dense representation of the city and its inhabitants. By systematically checking the consistency of the extracted information, these techniques also evaluate the precision and systematicity of the surveyors' work and therefore indirectly qualify the trust to be placed in the extracted information. This article reviews the history of this computational protosystem and describes how digital techniques offer not only systematic documentation, but also extraction perspectives for latent information, as yet uncharted, but implicitly present in this information system of the past.
Humanités numériques
2021-05-01
DOI: 10.4000/revuehn.1786
[138] Les vingt premières années du capitalisme linguistique : Enjeux globaux de la médiation algorithmique des langues
The mediation of linguistic flows by a handful of global actors has enabled the construction of language models whose performance is today unprecedented. This text revisits the main theses explaining the economic and technological dynamics at the origin of this phenomenon, extending reflections begun earlier (theses 1 and 2), before developing the question of the development and use of the new models based on the accumulated linguistic capital (theses 3 and 4) and the consequences for the new forms of algorithmic mediation that follow from them (theses 5 and 6). The challenge is to understand the shift taking place in the economy of expression when algorithms produce writing of higher performativity than ordinary language and are therefore naturally used to produce text through prostheses. We are witnessing what could be called a "second regime of linguistic capitalism", going beyond the auctioning of words to additionally offer the sale of linguistic services that substitute for writing itself. It then becomes possible, as with machine translation services, but this time for one's own language, to express oneself more effectively and to produce writing adapted to the various demands of professional or private life. We discuss how the generalization of these linguistic prostheses could lead to an atrophy of expressive capacity, for writing at first and perhaps, in time, for orality.
Prendre soin de l'informatique et des générations; Limoges: FYP Editions, 2021.
[137] Combining Visual and Textual Features for Semantic Segmentation of Historical Newspapers
The massive amounts of digitized historical documents acquired over the last decades naturally lend themselves to automatic processing and exploration. Research work seeking to automatically process facsimiles and extract information from them is multiplying, with document layout analysis as a first essential step. Although the identification and categorization of segments of interest in document images have seen significant progress over the last years thanks to deep learning techniques, many challenges remain, among others the use of more fine-grained segmentation typologies and the consideration of complex, heterogeneous documents such as historical newspapers. Besides, most approaches consider visual features only, ignoring textual signal. We introduce a multimodal neural model for the semantic segmentation of historical newspapers that directly combines visual features at pixel level with text embedding maps derived from, potentially noisy, OCR output. Based on a series of experiments on diachronic Swiss and Luxembourgish newspapers, we investigate the predictive power of visual and textual features and their capacity to generalize across time and sources. Results show consistent improvement of multimodal models in comparison to a strong visual baseline, as well as better robustness to the wide variety of our material.
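As a schematic illustration of the multimodal idea, the sketch below fuses pixel-level visual channels with a text-embedding map before a segmentation network would take over; the shapes, the embedding dimension and the token boxes are invented stand-ins, not the paper's configuration.

```python
import numpy as np

# Minimal sketch: fuse visual features and an OCR-derived text embedding map
# at pixel level, as input to a semantic segmentation network.
# Shapes and the embedding dimension (8) are illustrative assumptions.

H, W = 256, 256
image = np.random.rand(H, W, 3).astype(np.float32)       # RGB facsimile patch
embedding_map = np.zeros((H, W, 8), dtype=np.float32)    # one vector per pixel

# Hypothetical OCR tokens: (x0, y0, x1, y1, embedding) painted onto the map.
tokens = [(10, 20, 120, 40, np.random.rand(8).astype(np.float32))]
for x0, y0, x1, y1, vec in tokens:
    embedding_map[y0:y1, x0:x1] = vec  # every pixel in the box gets the word vector

fused = np.concatenate([image, embedding_map], axis=-1)  # (H, W, 3 + 8)
print(fused.shape)  # a CNN then predicts one class per pixel from this input
```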
Journal of Data Mining & Digital Humanities
2021
DOI: 10.5281/zenodo.4065271
2020
[136] I sistemi di immagini nell’archivio digitale di Vico Magistretti
The online publication of the digitized archive of Vico Magistretti, which brings together tens of thousands of preparatory sketches, technical drawings and photographs produced between 1946 and 2006, opens the way to a major renewal of research on the Italian designer and architect. The opening of such a special archive invites us to imagine the different perspectives that can be considered for exploring, visualizing and studying such a body of documents.
Narrare con l'Archivio. Forum internazionale, Milan, Italy, November 19, 2020.
[135] Swiss in motion: Analyser et visualiser les rythmes quotidiens. Une première approche à partir du dispositif Time-Machine.
Over the last fifty years, technological developments in transport and telecommunications have helped reconfigure spatio-temporal behaviors (Kaufmann, 2008). Individuals now benefit from a wide universe of choice in terms of transport modes and accessible places for carrying out their activities. This configuration particularly influences daily mobility behaviors, which tend to become more complex in both their spatial and temporal dimensions, leading to the emergence of intense and complex daily rhythms (Drevon, Gwiazdzinski, & Klein, 2017; Gutiérrez & García-Palomares, 2007). Recent research on Switzerland (Drevon, Gumy, & Kaufmann, 2020) suggests that daily rhythms are marked by considerable diversity in spatio-temporal configuration and activity density (Drevon, Gumy, Kaufmann, & Hausser, 2019). The share of daily rhythms corresponding to the commute-work-sleep routine is ultimately rather modest. This diversity of daily rhythms unfolds between very complex behaviors on one side and much simpler ones on the other, materializing at different spatial scales. Yet current analytical tools in the social sciences and in the socio-economics of transport still struggle to account for the complex forms of daily rhythms at the individual and territorial levels. Faced with this epistemological and methodological challenge, this paper proposes an innovative, interdisciplinary approach combining sociology, geography and computational science. Concretely, it proposes a geo-visualization tool for daily rhythms at individual and territorial scales, based on the spatio-temporal behaviors of the inhabitants of Switzerland. The objective is to put into perspective the differences in activity intensity across social situations and territories. The analyses draw on the 2015 edition of the Mobility and Transport Microcensus (MRMT), conducted every five years at the national level by the Federal Statistical Office and the Federal Office for Spatial Development. This survey comprises a sample of 57,090 people who were asked about all the trips they made on the day before the survey (CATI survey protocol). The visualization is produced with the Time-Machine platform (Kaplan, 2013; di Lenardo & Kaplan, 2015), which makes it possible to model a 4D virtual environment (Figure 1: https://youtu.be/41-klvXLCqM) and to simulate the unfolding of daily activities and trips. The first simulations reveal contrasting rhythmic regimes at the individual scale, differentiated by pace, frequency of actions, spatial scale and social position. At the territorial level, the visualizations show significant differences in the intensity of territorial use by individuals, as well as spatial specificities constitutive of the activities carried out there. These first visualizations reveal, first of all, social inequalities (gender, class) in the face of injunctions to activity (Viry, Ravalet, & Kaufmann, 2015; Drevon, 2019; Drevon & Kaufmann, 2020).
They also allow the modalities of territorial categorization to be rediscussed (Rérat, 2008; Schuler et al., 2007) on the basis of a dynamic approach that reflects the reality of temporary activities, thereby putting the principles of factorial urban ecology into perspective (Pruvot & Weber-Klein, 1984) and reinforcing the interest of the presential economy (Lejoux, 2009).
Swiss Mobility Conference, Lausanne, October 29-30, 2020.
[134] A digital reconstruction of the 1630–1631 large plague outbreak in Venice
The plague, an infectious disease caused by the bacterium Yersinia pestis, is widely considered to be responsible for the most devastating and deadly pandemics in human history. Starting with the infamous Black Death, plague outbreaks are estimated to have killed around 100 million people over multiple centuries, with local mortality rates as high as 60%. However, detailed pictures of the disease dynamics of these outbreaks centuries ago remain scarce, mainly due to the lack of high-quality historical data in digital form. Here, we present an analysis of the 1630–1631 plague outbreak in the city of Venice, using newly collected daily death records. We identify the presence of a two-peak pattern, for which we present two possible explanations based on computational models of disease dynamics. Systematically digitized historical records like the ones presented here promise to enrich our understanding of historical phenomena of enduring importance. This work contributes to the recently renewed interdisciplinary foray into the epidemiological and societal impact of pre-modern epidemics.
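For readers unfamiliar with computational models of disease dynamics, the sketch below runs the simplest member of that family, a discrete-time SIR model; the parameters are arbitrary illustrations and are not fitted to the Venetian death records used in the paper.

```python
# Minimal sketch: a discrete-time SIR epidemic model, the simplest member of
# the family of disease-dynamics models used to explain outbreak curves.
# Parameters are arbitrary illustrations, not fitted to the 1630-31 Venice data.

def simulate_sir(population, infected0, beta, gamma, days):
    s, i, r = population - infected0, infected0, 0.0
    removals = []
    for _ in range(days):
        new_infections = beta * s * i / population
        new_removals = gamma * i
        s -= new_infections
        i += new_infections - new_removals
        r += new_removals
        removals.append(new_removals)  # removals as a crude proxy for deaths
    return removals

curve = simulate_sir(population=140_000, infected0=10, beta=0.35, gamma=0.1, days=300)
peak_day = max(range(len(curve)), key=curve.__getitem__)
print(f"Single-peak epidemic: maximum daily removals on day {peak_day}")
# A plain SIR run produces one peak; reproducing the two-peak Venetian pattern
# requires richer assumptions, which is precisely the paper's point.
```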
Scientific Reports
2020-10-20
DOI: 10.1038/s41598-020-74775-6
[133] The Advent of the 4D Mirror World
The 4D Mirror World is considered to be the next planetary-scale information platform. This commentary gives an overview of the history of the converging trends that have progressively shaped this concept. It retraces how large-scale photographic surveys served to build the first 3D models of buildings, cities, and territories, how these models got shaped into physical and virtual globes, and how eventually the temporal dimension was introduced as an additional way for navigating not only through space but also through time. The underlying assumption of the early large-scale photographic campaign was that image archives had deeper depths of latent knowledge still to be mined. The technology that currently permits the advent of the 4D World through new articulations of dense photographic material combining aerial imagery, historic photo archives, huge video libraries, and crowd-sourced photo documentation precisely exploits this latent potential. Through the automatic recognition of “homologous points,” the photographic material gets connected in time and space, enabling the geometrical computation of hypothetical reconstructions accounting for a perpetually evolving reality. The 4D world emerges as a series of sparse spatiotemporal zones that are progressively connected, forming a denser fabric of representations. On this 4D skeleton, information of cadastral maps, BIM data, or any other specific layers of a geographical information system can be easily articulated. Most of our future planning activities will use it as a way not only to have smooth access to the past but also to plan collectively shared scenarios for the future.
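The "homologous points" mentioned here are the bread and butter of photogrammetry; as a hedged illustration (invented file names, generic OpenCV feature matching rather than the project's own pipeline), two photographs can be connected as follows.

```python
import cv2

# Minimal sketch: detect and match "homologous points" between two photographs
# using ORB features. File names are invented; this is generic OpenCV feature
# matching, not the actual 4D Mirror World pipeline.

img1 = cv2.imread("facade_1912.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("facade_2020.jpg", cv2.IMREAD_GRAYSCALE)
assert img1 is not None and img2 is not None, "supply two overlapping photographs"

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming distance suits ORB's binary descriptors; cross-checking filters outliers.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(f"{len(matches)} candidate homologous point pairs")
# These pairs can then feed pose estimation and triangulation, placing both
# images in a common spatio-temporal reference frame.
```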
Urban Planning
2020-06-30
DOI: 10.17645/up.v5i2.3133
[132] Neural networks for semantic segmentation of historical city maps: Cross-cultural performance and the impact of figurative diversity
In this work, we present a new semantic segmentation model for historical city maps that surpasses the state of the art in terms of flexibility and performance. Research in automatic map processing is largely focused on homogeneous corpora or even individual maps, leading to inflexible algorithms. Recently, convolutional neural networks have opened new perspectives for the development of more generic tools. Based on two new maps corpora, the first one centered on Paris and the second one gathering cities from all over the world, we propose a method for operationalizing the figuration based on traditional computer vision algorithms that allows large-scale quantitative analysis. In a second step, we propose a semantic segmentation model based on neural networks and implement several improvements. Finally, we analyze the impact of map figuration on segmentation performance and evaluate future ways to improve the representational flexibility of neural networks. To conclude, we show that these networks are able to semantically segment map data of a very large figurative diversity with efficiency.
2020
[131] Historical Newspaper Content Mining: Revisiting the impresso Project's Challenges in Text and Image Processing, Design and Historical Scholarship
Long abstract for a presentation at DH2020 (online).
2020. Digital Humanities Conference (DH), Ottawa, Canada, July 20-24, 2020. DOI: 10.5281/zenodo.4641894.
[130] Building a Mirror World for Venice
Between 2012 and 2019, ‘The Venice Time Machine Project’ developed a new methodology for modelling the past, present, and future of a city. This methodology is based on two pillars: (a) the vast digitisation and processing of the selected city's historical records, and (b) the digitisation of the city itself, another vast undertaking. The combination of these two processes has the potential to create a new kind of historical information system organised around a diachronic digital twin of a city.
The Aura in the Age of Digital Materiality: Rethinking Preservation in the Shadow of an Uncertain Future; Milan: Silvana Editoriale, 2020.
2019
[129] Transforming scholarship in the archives through handwritten text recognition: Transkribus as a case study
Purpose: An overview of the current use of handwritten text recognition (HTR) on archival manuscript material, as provided by the EU H2020 funded Transkribus platform. It explains HTR, demonstrates Transkribus, gives examples of use cases, highlights the effect HTR may have on scholarship, and evidences this turning point in the advanced use of digitised heritage content. Design/methodology/approach: This paper adopts a case study approach, using the development and delivery of the one openly available HTR platform for manuscript material. Findings: Transkribus has demonstrated that HTR is now a usable technology that can be employed in conjunction with mass digitisation to generate accurate transcripts of archival material. Use cases are demonstrated, and a cooperative model is suggested as a way to ensure sustainability and scaling of the platform; however, funding and resourcing issues are identified. Research limitations/implications: The paper presents results from projects; further user studies could be undertaken involving interviews, surveys, etc. Practical implications: Only HTR provided via Transkribus is covered; however, this is the only publicly available platform for HTR on individual collections of historical documents at the time of writing, and it represents the current state of the art in this field. Social implications: The increased access to information contained within historical texts has the potential to be transformational for both institutions and individuals. Originality/value: This is the first published overview of how HTR is used by a wide archival studies community, reporting and showcasing current application of handwriting technology in the cultural heritage sector.
Journal of Documentation
2019-09-09
DOI: 10.1108/JD-07-2018-0114
[128] A deep learning approach to Cadastral Computing
This article presents a fully automatic pipeline to transform the Napoleonic cadastres into an information system. The cadastres established during the first years of the 19th century cover a large part of Europe. For many cities they provide one of the first geometrical surveys, linking precise parcels with identification numbers. These identification numbers point to registers containing the names of the proprietors. As the Napoleonic cadastres include millions of parcels, they offer a detailed snapshot of a large part of Europe's population at the beginning of the 19th century. As many kinds of computation can be done on such a large object, we use the neologism "cadastral computing" to refer to the operations performed on such datasets. This approach is the first fully automatic pipeline to transform the Napoleonic cadastres into an information system.
2019-07-11. Digital Humanities Conference, Utrecht, Netherlands, July 8-12, 2019.
[127] Repopulating Paris: massive extraction of 4 million addresses from city directories between 1839 and 1922
In 1839, in Paris, the Maison Didot bought the Bottin company. Sébastien Bottin, trained as a statistician, was the initiator of a high-impact yearly publication called "Almanachs", containing the listing of residents, businesses and institutions, arranged geographically, alphabetically and by activity typologies (Fig. 1). These regular publications met with great success. In 1820, the Parisian Bottin Almanach contained more than 50,000 addresses, and until the end of the 20th century the word "Bottin" was the colloquial term for a city directory in France. The publication of the "Didot-Bottin" continued at an annual rhythm, mapping the evolution of the active population of Paris and other cities in France. The relevance of automatically mining city directories for historical reconstruction has already been argued by several authors (e.g. Osborne, Hamilton and Macdonald 2014, or Berenbaum et al. 2016). This article reports on the extraction and analysis of the data contained in the "Didot-Bottin" covering the period 1839-1922 for Paris, digitized by the Bibliothèque nationale de France. We process more than 27,500 pages to create a database of 4.2 million entries linking addresses, person mentions and activities.
2019-07-02. Digital Humanities Conference 2019 (DH2019), Utrecht, the Netherlands, July 9-12, 2019. DOI: 10.34894/MNF5VQ.
[126] Frédéric Kaplan, Isabella di Lenardo
Apollo: The International Art Magazine
2019-01-01
2018
[125] dhSegment: A generic deep-learning approach for document segmentation
The 16th International Conference on Frontiers in Handwriting Recognition, Niagara Falls, USA, 5-8 August 2018.
[124] Comparing human and machine performances in transcribing 18th century handwritten Venetian script
Automatic transcription of handwritten texts has made important progress in recent years. This increase in performance, essentially due to new architectures combining convolutional neural networks with recurrent neural networks, opens new avenues for searching in large databases of archival and library records. This paper reports on our recent progress in making millions of digitized Venetian documents searchable, focusing on a first subset of 18th century fiscal documents from the Venetian State Archives. For this study, about 23'000 image segments containing 55'000 Venetian names of persons and places were manually transcribed by archivists trained to read this kind of handwritten script. This annotated dataset was used to train and test a deep learning architecture with a performance level (about 10% character error rate) that is satisfactory for search use cases. This paper compares this level of reading performance with the reading capabilities of Italian-speaking transcribers. More than 8,500 new human transcriptions were produced, confirming that the amateur transcribers were not as good as the experts. However, on average, the machine outperforms the amateur transcribers in this transcription task.
2018-07-26. Digital Humanities Conference, Mexico City, Mexico, June 24-29, 2018.
[123] The Scholar Index: Towards a Collaborative Citation Index for the Arts and Humanities
Mexico City, 26-29 June 2018.
[122] New Techniques for the Digitization of Art Historical Photographic Archives - the Case of the Cini Foundation in Venice
Numerous libraries and museums hold large art historical photographic collections, numbering millions of images. Because of their non-standard format, these collections pose special challenges for digitization. This paper addresses these difficulties by proposing new techniques developed for the digitization of the photographic archive of the Cini Foundation. This included the creation of a custom-built circular, rotating scanner. The resulting digital images were then automatically indexed, while artificial intelligence techniques were employed in information extraction. Combined, these tools vastly sped up processes which were traditionally undertaken manually, paving the way for new ways of exploring the collections.
Archiving Conference
2018-02-01
DOI: 10.2352/issn.2168-3204.2018.1.0.2
[121] Extracting And Aligning Artist Names in Digitized Art Historical Archives
The largest collections of art historical images are not found online but are safeguarded by museums and other cultural institutions in photographic libraries. These collections can encompass millions of reproductions of paintings, drawings, engravings and sculptures. The 14 largest institutions together hold an estimated 31 million images (Pharos). Manual digitization and extraction of image metadata undertaken over the years has succeeded in placing fewer than 100,000 of these items online for search. Given the sheer size of the corpus, it is pressing to devise new ways for the automatic digitization of these art historical archives and the extraction of their descriptive information (metadata which can contain artist names, image titles, and holding collection). This paper focuses on the crucial pre-processing steps that permit the extraction of information directly from scans of a digitized photo collection. Taking the photographic library of the Giorgio Cini Foundation in Venice as a case study, this paper presents a technical pipeline which can be employed in the automatic digitization and information extraction of large collections of art historical images. In particular, it details the automatic extraction and alignment of artist names to known databases, which opens a window into a collection whose contents are unknown. Numbering nearly one million images, the art history library of the Cini Foundation was established in the mid-twentieth century to collect and record the history of Venetian art. The current study examines a corpus of 330'000+ digitized images.
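Name alignment of this kind is often bootstrapped with simple fuzzy string matching, as in the hedged sketch below using Python's standard difflib; the name list is a toy stand-in, not the project's reference databases.

```python
import difflib

# Minimal sketch: aligning noisy OCR'd artist names to a reference database
# by fuzzy string matching. The names are toy examples, not the real data.

reference = ["Tintoretto", "Titian", "Paolo Veronese", "Canaletto"]

def align(extracted: str, candidates, cutoff=0.75):
    """Return the best-matching reference name, or None below the cutoff."""
    matches = difflib.get_close_matches(extracted, candidates, n=1, cutoff=cutoff)
    return matches[0] if matches else None

for noisy in ["Tintorett0", "Tizian", "Veronese, Paolo"]:
    print(f"{noisy!r} -> {align(noisy, reference)!r}")
```

The last example fails to match because of word order, which is why real alignment pipelines normalize names (reordering, stripping diacritics) before matching.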
2018. Digital Humanities Conference 2018 Puentes-Bridges, Mexico City, June 26-29, 2018.
[120] Negentropic linguistic evolution: A comparison of seven languages
The relationship between the entropy of language and its complexity has been the subject of much speculation, some seeing the increase of linguistic entropy as a sign of linguistic complexification, others interpreting entropy drop as a marker of greater regularity. Some evolutionary explanations, like the learning bottleneck hypothesis, argue that communication systems having more regular structures tend to have evolutionary advantages over more complex structures. Other structural effects of communication networks, like globalization of exchanges or algorithmic mediation, have been hypothesized to have a regularization effect on language. Longer-term studies are now possible thanks to the arrival of large-scale diachronic corpora, like newspaper archives or digitized libraries. However, simple analyses of such datasets are prone to misinterpretation due to significant variations of corpus size over the years and the indirect effect this can have on various measures of language change and linguistic complexity. In particular, it is important not to misinterpret the arrival of new words as an increase in complexity, as this variation is intrinsic, as is the variation of corpus size. This paper is an attempt to conduct an unbiased diachronic study of linguistic complexity over seven different languages using the Google Books corpus. The paper uses a simple entropy measure on a closed, but nevertheless large, subset of words, called kernels. The kernel contains only the words that are present without interruption for the whole length of the study; this excludes all the words that arrived or disappeared during the period. We argue that this method is robust to variations of corpus size and permits the study of change in complexity despite possible (and, in the case of Google Books, unknown) changes in the composition of the corpus. Indeed, the evolution observed in the seven different languages shows rather different patterns that are not directly correlated with the evolution of the size of the respective corpora. The rest of the paper presents the methods followed, the results obtained and the next steps we envision.
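To make the kernel-entropy idea concrete, here is a hedged sketch (a toy two-year corpus and unigram entropy only): restrict the vocabulary to the words present in every year, then track the Shannon entropy of their frequency distribution over time.

```python
import math
from collections import Counter

# Minimal sketch: Shannon entropy of a closed word "kernel" across years.
# The two toy word lists stand in for year slices of a diachronic corpus.

corpus_by_year = {
    1900: "the cat sat on the mat the cat".split(),
    1950: "the cat sat on the mat quietly near the mat".split(),
}

# Kernel: words present without interruption over the whole period.
kernel = set.intersection(*(set(words) for words in corpus_by_year.values()))

def kernel_entropy(words, kernel):
    """Entropy (bits) of the frequency distribution restricted to the kernel."""
    counts = Counter(w for w in words if w in kernel)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

for year, words in sorted(corpus_by_year.items()):
    print(year, round(kernel_entropy(words, kernel), 3))
```

Because the kernel is fixed, newly arrived words cannot inflate the measure, which is the robustness property the abstract claims.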
2018. Digital Humanities 2018, Mexico City, Mexico, June 26-29, 2018.
[119] dhSegment: A generic deep-learning approach for document segmentation
In recent years there have been multiple successful attempts to tackle document processing problems separately by designing task-specific hand-tuned strategies. We argue that the diversity of historical document processing tasks prohibits solving them one at a time and shows the need for generic approaches able to handle the variability of historical series. In this paper, we address multiple tasks simultaneously, such as page extraction, baseline extraction, layout analysis, and the extraction of multiple typologies of illustrations and photographs. We propose an open-source implementation of a CNN-based pixel-wise predictor coupled with task-dependent post-processing blocks. We show that a single CNN architecture can be used across tasks with competitive results. Moreover, most of the task-specific post-processing steps can be decomposed into a small number of simple and standard reusable operations, adding to the flexibility of our approach.
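The "simple and standard reusable operations" referred to here are of the kind sketched below, thresholding a probability map and filtering connected components; the probability map is random stand-in data, not actual dhSegment output.

```python
import numpy as np
from scipy import ndimage

# Minimal sketch: typical post-processing after a pixel-wise CNN predictor:
# binarize a probability map, then keep only large connected components.
# The probability map is random stand-in data, not real dhSegment output.

prob_map = np.random.rand(512, 512)           # per-pixel "page region" probability
binary = prob_map > 0.5                        # 1. thresholding

labels, n = ndimage.label(binary)              # 2. connected components
sizes = ndimage.sum(binary, labels, range(1, n + 1))
keep = np.isin(labels, np.flatnonzero(sizes >= 200) + 1)  # 3. drop small blobs

print(f"{n} components found, {int(keep.sum())} pixels kept after filtering")
```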
2018-01-01. 16th International Conference on Frontiers in Handwriting Recognition (ICFHR), Niagara Falls, NY, Aug 05-08, 2018. p. 7-12. DOI: 10.1109/ICFHR-2018.2018.00011.
[118] Deep Learning for Logic Optimization Algorithms
The slowing down of Moore's law and the emergence of new technologies put increasing pressure on the field of EDA. There is a constant need to improve optimization algorithms. However, finding and implementing such algorithms is a difficult task, especially with the novel logic primitives and potentially unconventional requirements of emerging technologies. In this paper, we cast logic optimization as a deterministic Markov decision process (MDP). We then take advantage of recent advances in deep reinforcement learning to build a system that learns how to navigate this process. Our design has a number of desirable properties. It is autonomous because it learns automatically and does not require human intervention. It generalizes to large functions after training on small examples. Additionally, it intrinsically supports both single- and multi-output functions, without the need to handle special cases. Finally, it is generic because the same algorithm can be used to achieve different optimization objectives, e.g., size and depth.
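To illustrate what casting logic optimization as a deterministic MDP can look like, here is a toy sketch in which states are Boolean expression strings and actions are rewrite rules; the rule set and reward are invented and far simpler than the paper's action space.

```python
# Toy sketch: logic optimization as a deterministic MDP. States are Boolean
# expression strings, actions are rewrite rules, reward is the size reduction.
# The rules are invented illustrations; the paper learns a policy with deep
# reinforcement learning over a much richer space of logic transformations.

RULES = [
    ("(a & a)", "a"),   # idempotence
    ("(a & ~a)", "0"),  # contradiction
    ("(a | 0)", "a"),   # identity
]

def step(state: str, action: int):
    """Apply one rewrite rule; deterministic next state and reward."""
    pattern, replacement = RULES[action]
    next_state = state.replace(pattern, replacement)
    reward = len(state) - len(next_state)  # smaller expression = positive reward
    return next_state, reward

state = "((a & a) | 0)"
for action in (0, 2):                      # a hand-picked action sequence
    state, reward = step(state, action)
    print(f"action {action}: state={state!r}, reward={reward}")
# A learned policy would choose actions to maximize cumulative size reduction.
```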
2018-01-01. IEEE International Symposium on Circuits and Systems (ISCAS), Florence, Italy, May 27-30, 2018. DOI: 10.1109/ISCAS.2018.8351885.
[117] Making large art historical photo archives searchable
In recent years, museums, archives and other cultural institutions have initiated important programs to digitize their collections. Millions of artefacts (paintings, engravings, drawings, ancient photographs) are now represented in digital photographic format. Furthermore, through progress in standardization, a growing portion of these images are now available online in an easily accessible manner. This thesis studies how such a large-scale art history collection can be made searchable using new deep learning approaches for processing and comparing images. It takes as a case study the processing of the photo archive of the Foundation Giorgio Cini, where more than 300'000 images have been digitized. We demonstrate how a generic processing pipeline can reliably extract the visual and textual content of scanned images, opening up ways to efficiently digitize large photo collections. Then, by leveraging an annotated graph of visual connections, a metric is learnt that allows clustering and searching through artwork reproductions independently of their medium, effectively solving a difficult problem of cross-domain image search. Finally, the thesis studies how a web interface allows users to perform different searches based on this metric. We also evaluate the process by which users can annotate elements of interest during their navigation, to be added to the database, allowing the system to be trained further and give better results. By documenting a complete approach on how to go from a physical photo archive to a state-of-the-art navigation system, this thesis paves the way for a global search engine across the world's photo archives.
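A metric learnt from an annotated graph of visual connections is typically trained with a triplet-style objective of the kind sketched below (plain NumPy, random stand-in embeddings); the thesis's actual architecture and training data are more involved.

```python
import numpy as np

# Minimal sketch: the triplet objective behind metric learning for image search.
# Embeddings are random stand-ins; a real system would use CNN features of
# artwork reproductions, with positives taken from annotated visual links.

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge loss pushing the anchor closer to the positive than the negative."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

rng = np.random.default_rng(0)
anchor = rng.normal(size=128)                        # e.g. a painting
positive = anchor + rng.normal(scale=0.1, size=128)  # e.g. an engraving of it
negative = rng.normal(size=128)                      # an unrelated artwork

print(f"loss: {triplet_loss(anchor, positive, negative):.3f}")
# Minimizing this loss over many triplets yields an embedding space where
# nearest-neighbour search retrieves reproductions across media.
```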
Lausanne, EPFL, 2018. DOI: 10.5075/epfl-thesis-8857.
[116] The Intellectual Organisation of History
A tradition of scholarship discusses the characteristics of different areas of knowledge, in particular after modern academia compartmentalized them into disciplines. The academic approach is often put to question: are there two or more cultures? Is an ever-increasing specialization the only way to cope with information abundance, or are holistic approaches helpful too? What is happening with the digital turn? If these questions are well studied for the sciences, our understanding of how the humanities might differ in their own respect is far less advanced. In particular, modern academia might foster specific patterns of specialization in the humanities. Eventually, the recent rise in the application of digital methods to research, known as the digital humanities, might be introducing structural adaptations through the development of shared research technologies and the advent of organizational practices such as the laboratory. It therefore seems timely and urgent to map the intellectual organization of the humanities. This investigation depends on a few traits such as the level of codification, the degree of agreement among scholars, and the level of coordination of their efforts. These characteristics can be studied by measuring their influence on the outcomes of scientific communication. In particular, this thesis focuses on history as a discipline, using bibliometric methods. In order to explore history in its complexity, an approach to creating collaborative citation indexes in the humanities is proposed, resulting in a new dataset comprising monographs, journal articles and citations to primary sources. Historians' publications were found to organize thematically and chronologically, sharing a limited set of core sources across small communities. Core sources act in two ways with respect to the intellectual organization: locally, by adding connectivity within communities, or globally, as weak ties across communities. Over recent decades, fragmentation is on the rise in the intellectual networks of historians, and a comparison across a variety of specialisms from the human, natural and mathematical sciences revealed the fragility of such networks across the axes of citation and textual similarity. Humanists organize into more, smaller and more scattered topical communities than scientists. A characterisation of history is eventually proposed. Historians produce new historiographical knowledge with a focus on evidence or interpretation. The former aims at providing the community with an agreed-upon factual resource. Interpretive work is instead mainly focused on creating novel perspectives. A second axis refers to two modes of exploration of new ideas: in-breadth, where novelty relates to adding new, previously unknown pieces to the mosaic, or in-depth, where novelty happens by improving on previous results. While all combinations are possible, historians tend to focus on in-breadth interpretations, with the immediate consequence that growth accentuates intellectual fragmentation in the absence of further consolidating factors such as theory or technologies. Research on evidence might have a different impact by potentially scaling up in the digital space, and in so doing influence the modes of interpretation in turn. This process is not dissimilar to the gradual rise in importance of research technologies and collaborative competition in the mathematical and natural sciences. This is perhaps the promise of the digital humanities.
Lausanne, EPFL, 2018. DOI: 10.5075/epfl-thesis-8537.
[115] Mapping Affinities in Academic Organizations
Scholarly affinities are one of the most fundamental hidden dynamics that drive scientific development. Some affinities are actual, and consequently can be measured through classical academic metrics such as co-authoring. Other affinities are potential, and therefore do not leave visible traces in information systems; for instance, some peers may share interests without actually knowing it. This article illustrates the development of a map of affinities for academic collectives, designed to be relevant to three audiences: the management, the scholars themselves, and the external public. Our case study involves the School of Architecture, Civil and Environmental Engineering of EPFL, hereinafter ENAC. The school consists of around 1,000 scholars, 70 laboratories, and 3 institutes. The actual affinities are modeled using the data available from the information systems reporting publications, teaching, and advising scholars, whereas the potential affinities are addressed through text mining of the publications. The major challenge for designing such a map is to represent the multi-dimensionality and multi-scale nature of the information. The affinities are not limited to the computation of heterogeneous sources of information; they also apply at different scales. The map, thus, shows local affinities inside a given laboratory, as well as global affinities among laboratories. This article presents a graphical grammar to represent affinities. Its effectiveness is illustrated by two actualizations of the design proposal: an interactive online system in which the map can be parameterized, and a large-scale carpet of 250 square meters. In both cases, we discuss how the materiality influences the representation of data, in particular the way key questions could be appropriately addressed considering the three target audiences: the insights gained by the management and their consequences in terms of governance, the understanding of the scholars’ own positioning in the academic group in order to foster opportunities for new collaborations and, eventually, the interpretation of the structure from a general public to evaluate the relevance of the tool for external communication.
Frontiers in Research Metrics and Analytics
2018-02-19
DOI: 10.3389/frma.2018.00004
[114] Mapping affinities: visualizing academic practice through collaboration
Academic affinities are one of the most fundamental hidden dynamics that drive scientific development. Some affinities are actual, and consequently can be measured through classical academic metrics such as co-authoring. Other affinities are potential, and therefore do not have visible traces in information systems; for instance, some peers may share scientific interests without actually knowing it. This thesis illustrates the development of a map of affinities for scientific collectives, which is intended to be relevant to three audiences: the management, the scholars themselves, and the external public. Our case study involves the School of Architecture, Civil and Environmental Engineering of EPFL, which consists of three institutes, seventy laboratories, and around one thousand employees. The actual affinities are modeled using the data available from the academic systems reporting publications, teaching, and advising, whereas the potential affinities are addressed through text mining of the documents registered in the information system. The major challenge for designing such a map is to represent the multi-dimensionality and multi-scale nature of the information. The affinities are not limited to the computation of heterogeneous sources of information; they also apply at different scales. Therefore, the map shows local affinities inside a given laboratory, as well as global affinities among laboratories. The thesis presents a graphical grammar to represent affinities. This graphical system is actualized in several embodiments, among which a large-scale carpet of 250 square meters and an interactive online system in which the map can be parameterized. In both cases, we discuss how the actualization influences the representation of data, in particular the way key questions could be appropriately addressed considering the three target audiences: the insights gained by the management and the related decisions, the understanding of the researchers' own positioning in the academic collective that might reveal opportunities for new synergies, and eventually the interpretation of the structure from an external standpoint, suggesting the relevance of the tool for communication.
EPFL, 2018. DOI: 10.5075/epfl-thesis-8242.
2017
[113] Layout Analysis on Newspaper Archives
The study of newspaper layout evolution through historical corpora has been addressed by diverse qualitative and quantitative methods in the past few years. The recent availability of large corpora of newspapers is now making the quantitative analysis of layout evolution ever more popular. This research investigates a method for the automatic detection of layout evolution on scanned images with a factorial analysis approach. The notion of eigenpages is defined by analogy with eigenfaces used in face recognition processes. The corpus of scanned newspapers that was used contains 4 million press articles, covering about 200 years of archives. This method can automatically detect layout changes of a given newspaper over time, rebuilding a part of its past publishing strategy and retracing major changes in its history in terms of layout. Besides these advantages, it also makes it possible to compare several newspapers at the same time and therefore to compare the layout changes of multiple newspapers based only on scans of their issues.
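By analogy with eigenfaces, the eigenpages idea amounts to a principal component analysis over flattened page images, as in this hedged sketch (random stand-ins for downsampled scans, PCA via SVD); the study's actual corpus preparation is of course more careful.

```python
import numpy as np

# Minimal sketch: "eigenpages" as PCA over flattened page scans, by analogy
# with eigenfaces. The pages are random stand-ins for downsampled newspaper
# scans; real data would be normalized crops of digitized front pages.

pages = np.random.rand(200, 64 * 64)       # 200 pages, flattened 64x64 layouts
mean_page = pages.mean(axis=0)
centered = pages - mean_page

# SVD of the centered data: the rows of vt are the eigenpages.
u, s, vt = np.linalg.svd(centered, full_matrices=False)
eigenpages = vt[:10]                        # top 10 layout components

# Projecting each page onto the eigenpages gives a compact layout signature;
# tracking these signatures over publication years reveals layout changes.
signatures = centered @ eigenpages.T        # shape (200, 10)
print(signatures.shape)
```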
2017. Digital Humanities 2017, Montreal, Canada, August 8-11, 2017.
[112] Machine Vision Algorithms on Cadaster Plans
Cadaster plans are cornerstones for reconstructing dense representations of the history of the city. They provide information about the city's urban shape, making it possible to reconstruct the footprints of the most important urban components, as well as information about the urban population and city functions. However, as some of these handwritten documents are more than 200 years old, establishing a processing pipeline for interpreting them remains extremely challenging. We present the first implementation of a fully automated process capable of segmenting and interpreting Napoleonic cadaster maps of the Veneto region dating from the beginning of the 19th century. Our system extracts the geometry of each of the drawn parcels and classifies, reads and interprets the handwritten labels.
2017. Premiere Annual Conference of the International Alliance of Digital Humanities Organizations (DH 2017), Montreal, Canada, August 8-11, 2017.
[111] Analyse multi-échelle de n-grammes sur 200 années d'archives de presse
The recent availability of large corpora of digitized texts over several centuries opens the way to new forms of studies on the evolution of languages. In this thesis, we study a corpus of 4 million press articles covering a period of 200 years. The thesis tries to measure the evolution of written French over this period at the level of words and expressions, but also in a more global way, by attempting to define integrated measures of linguistic evolution. The methodological choice is to introduce a minimum of linguistic hypotheses by developing new measures around the simple notion of the n-gram, a sequence of n consecutive words. On this basis, the thesis explores the potential of already known concepts such as temporal frequency profiles and their diachronic correlations, but also introduces new abstractions such as the notion of a resilient linguistic kernel or the decomposition of profiles into solidified expressions according to simple statistical models. Through the use of distributed computational techniques, it develops methods to test the relevance of these concepts on a large amount of textual data, thus making it possible to propose a virtual observatory of the diachronic evolutions associated with a given corpus. On this basis, the thesis explores more precisely the multi-scale dimension of linguistic phenomena by considering how standardized measures evolve when applied to increasingly long n-grams. The discrete and continuous scale from isolated entities (n=1) to increasingly complex and structured expressions (1 < n < 10) offers an axis of study transversal to the classical differentiations that ordinarily structure linguistics: syntax, semantics, pragmatics, and so on. The thesis explores the quantitative and qualitative diversity of phenomena at these different scales of language and develops a novel approach by proposing multi-scale measurements and formalizations, with the aim of characterizing more fundamental structural aspects of the studied phenomena.
Lausanne, EPFL, 2017. DOI : 10.5075/epfl-thesis-8180.
[110] A Simple Set of Rules for Characters and Place Recognition in French Novels
Frontiers in Digital Humanities
2017
DOI : 10.3389/fdigh.2017.00006
[109] Big Data of the Past
Big Data is not a new phenomenon. History is punctuated by regimes of data acceleration, characterized by feelings of information overload accompanied by periods of social transformation and the invention of new technologies. During these moments, private organizations, administrative powers, and sometimes isolated individuals have produced important datasets, organized following a logic that is often subsequently superseded but was at the time, nevertheless, coherent. To be translated into relevant sources of information about our past, these document series need to be redocumented using contemporary paradigms. The intellectual, methodological, and technological challenges linked to this translation process are the central subject of this article.
Frontiers in Digital Humanities
2017
DOI : 10.3389/fdigh.2017.00012
[108] Narrative Recomposition in the Context of Digital Reading
In any creative process, the tools one uses have an immediate influence on the shape of the final artwork. However, while the digital revolution has redefined core values in most creative domains over the last few decades, its impact on literature remains limited. This thesis explores the relevance of digital tools for several aspects of novel writing by focusing on two research questions: Is it possible for an author to edit better novels out of already published ones, given access to adapted tools? And will authors change their way of writing when they know how they are being read? This thesis is a multidisciplinary participatory study, actively involving the Swiss novelist Daniel de Roulet, to construct measures, visualizations, and digital tools aimed at supporting the dynamic reordering of narrative material, similar to how one edits video footage. We developed and tested various text analysis and visualization tools, the results of which were interpreted and used by the author to recompose a family saga out of material he had been writing for twenty-four years. Based on this research, we released Saga+, an online editing, publishing, and reading tool. The platform was handed over to third parties to improve existing writings, making new novels available to the public as a result. While many researchers have studied the structuring of texts either through global statistical features or micro-syntactic analyses, we demonstrate that by allowing visualization and interaction at an intermediary level of organisation, authors can manipulate their own texts in agile ways. By integrating readers' traces into this newly revealed structure, authors can start to approach the question of optimizing their writing processes in ways similar to what is practiced in other media industries. The introduction of tools for optimal composition opens new avenues for authors, as well as a controversial debate regarding the future of literature.
Lausanne, EPFL, 2017. DOI : 10.5075/epfl-thesis-7592.
[107] Optimized scripting in Massive Open Online Courses
The Time Machine MOOC, currently under preparation, is designed to provide the necessary knowledge for students to use the editing tool of the Time Machine platform. The first test case of the platform is centered on our current work on the City of Venice and its archives. Small teaching modules focus on specific skills of increasing difficulty: segmenting a word on a page, transcribing a word from a document series, georeferencing ancient maps using homologous points, disambiguating named entities, redrawing urban structures, finding matching details between paintings and writing scripts that perform some of these tasks automatically. Other skills include actions in the physical world, like scanning pages, books and maps, or performing a photogrammetric reconstruction of a sculpture by taking a large number of pictures. Finally, some modules are dedicated to general historical, linguistic, technical or archival knowledge that constitutes a prerequisite for mastering specific tasks. A general dependency graph has been designed, specifying in which order the skills can be acquired. The performance of most tasks can be tested using predefined exercises and evaluation metrics, which allows for a precise evaluation of each student's level of mastery. When a student successfully passes the test related to a skill, he or she gets the credentials to use that specific tool in the platform and starts contributing. However, the teaching options can vary greatly for each skill. Building upon the script concept developed by Dillenbourg and colleagues, we designed each tutorial as a parameterized sequence. A simple gradient descent method is used to progressively optimize the parameters in order to maximize the success rate of the students at the skill tests and thereby seek a form of optimality among the various design choices for the teaching methods. Thus, the more students use the platform, the more efficient the teaching scripts become.
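The script optimisation described above can be sketched as finite-difference gradient ascent on the measured success rate. Here success_rate stands in for the real evaluation of a parameterised tutorial against student cohorts and is purely hypothetical.

import numpy as np

def optimize_script(params, success_rate, lr=0.05, eps=0.1, steps=50):
    # params: vector of tutorial parameters (pacing, exercise counts, ...)
    params = np.asarray(params, dtype=float)
    for _ in range(steps):
        grad = np.zeros_like(params)
        base = success_rate(params)
        for i in range(len(params)):
            bumped = params.copy()
            bumped[i] += eps
            grad[i] = (success_rate(bumped) - base) / eps
        params += lr * grad   # ascend towards higher student success rates
    return params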
Dariah Teach, Université de Lausanne, Switzerland, March 23-24, 2017.
[106] The references of references: a method to enrich humanities library catalogs with citation data
The advent of large-scale citation indexes has greatly impacted the retrieval of scientific information in several domains of research. The humanities have largely remained outside of this shift, despite their increasing reliance on digital means for information seeking. Given that publications in the humanities have a longer than average life-span, mainly due to the importance of monographs for the field, this article proposes to use domain-specific reference monographs to bootstrap the enrichment of library catalogs with citation data. Reference monographs are works considered to be of particular importance in a research library setting, and likely to possess characteristic citation patterns. The article shows how to select a corpus of reference monographs, and proposes a pipeline to extract the network of publications they refer to. Results using a set of reference monographs in the domain of the history of Venice show that only 7% of extracted citations are made to publications already within the initial seed. Furthermore, the resulting citation network suggests the presence of a core set of works in the domain, cited more frequently than average.
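Conceptually, the extracted citations form a directed graph from citing monographs to cited works, in which the core set of the domain surfaces as works cited more often than average. A toy sketch with networkx; the data is illustrative.

import networkx as nx

citations = [("MonographA", "Work1"), ("MonographA", "Work2"),
             ("MonographB", "Work1"), ("MonographC", "Work1")]

g = nx.DiGraph()
g.add_edges_from(citations)

in_deg = dict(g.in_degree())
mean_deg = sum(in_deg.values()) / len(in_deg)
core = [w for w, d in in_deg.items() if d > mean_deg]  # candidate core works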
International Journal on Digital Libraries
2017
DOI : 10.1007/s00799-017-0210-1
[105] Studying Linguistic Changes over 200 Years of Newspapers through Resilient Words Analysis
This paper presents a methodology for analyzing linguistic changes in a given textual corpus that overcomes two common problems in corpus linguistics studies: the monotonic increase of corpus size over time and the presence of noise in the textual data. In addition, our method makes it possible to better target the linguistic evolution of the corpus, rather than other aspects such as noise fluctuation or topic evolution. A corpus formed by two newspapers, "La Gazette de Lausanne" and "Le Journal de Genève", is used, providing 4 million articles from 200 years of archives. We first perform classical measurements on this corpus in order to provide indicators and visualizations of linguistic evolution. We then define the concepts of lexical kernel and word resilience to face the two challenges of noise and corpus size fluctuations. The paper ends with a discussion based on a comparison of results from linguistic change analysis and concludes with possible future work continuing in that direction.
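A minimal sketch of the two concepts, assuming the corpus has been reduced to one vocabulary set per year (names illustrative): the lexical kernel is the set of words present in every yearly slice, and a word's resilience can be approximated by the fraction of years in which it appears.

def lexical_kernel(yearly_vocab):
    # yearly_vocab: dict mapping year -> set of words seen that year
    years = list(yearly_vocab)
    kernel = set.intersection(*(yearly_vocab[y] for y in years))
    resilience = {w: sum(w in yearly_vocab[y] for y in years) / len(years)
                  for w in set.union(*yearly_vocab.values())}
    return kernel, resilience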
Frontiers in Digital Humanities
2017
DOI : 10.3389/fdigh.2017.00002
2016
[104] From Documents to Structured Data: First Milestones of the Garzoni Project
Led by an interdisciplinary consortium, the Garzoni project undertakes the study of apprenticeship, work and society in early modern Venice by focusing on a specific archival source, namely the Accordi dei Garzoni from the Venetian State Archives. The project revolves around two main phases with, in the first instance, the design and the development of tools to extract and render information contained in the documents (according to Semantic Web standards) and, as a second step, the examination of such information. This paper outlines the main progress and achievements during the first year of the project.
DHCommons
2016
[103] Ancient administrative handwritten documents: virtual x-ray reading
A method for detecting ink writings in a specimen comprising stacked pages, allowing page-by-page reading without turning pages. The method comprises the steps of taking a set of projection x-ray images for different positions of the specimen with respect to an x-ray source and a detector from an apparatus for taking projection x-ray images; storing the set of projection x-ray images in a suitable computer system; and processing the set of projection x-ray images to tomographically reconstruct the shape of the specimen.
2016.
WO2015189817
[102] Rendre le passé présent
The conception of a four-dimensional space, whose agile navigation makes it possible to reintroduce a fluid continuity between present and past, belongs to the old philosophico-technological dream of the time machine. The historical moment to which we are invited is the continuation of a long process in which fiction, technology, science and culture intermingle. The time machine is that ever-discussed horizon, progressively approached and, today perhaps for the first time, within reach.
Forum des 100, Université de Lausanne, Switzerland, May 2016.
[101] La modélisation du temps dans les Digital Humanities
Digital interfaces are optimized every day to offer frictionless navigation through the multiple dimensions of the present. It is this fluidity, characteristic of this new relationship to documentary records, that the Digital Humanities could succeed in reintroducing into the exploration of the past. A single button should allow us to slide from a representation of the present to the representation of the same referent 10, 100 or 1000 years ago. Ideally, interfaces for navigating in time should offer the same agility of action as those that let us zoom in and out on objects as large and dense as the terrestrial globe. Textual search, the new gateway to knowledge since the beginning of the 21st century, should extend with the same simplicity to the contents of the documents of the past. Visual search, the second great moment in the indexing of the world, whose first results are beginning to enter our everyday digital practices, could be the keystone of access to the billions of documents that we must now make accessible in digital form. To make the past present, it would have to be restructured according to the logic of the structures of digital society. What would time become in this transformation? Simply a new dimension of space? The answer is perhaps more subtle.
Regimes temporels et sciences historiques, Bern, October 14, 2016.
[100] L'Europe doit construire la première Time Machine
The Time Machine project, competing in the race for the new FET Flagships, proposes a unique archiving and computing infrastructure to structure, analyze and model data from the past, realign it with the present and make it possible to project into the future. It is supported by 70 institutions from 20 countries and by 13 international programmes.
2016.
[99] Visual Link Retrieval in a Database of Paintings
This paper examines how far state-of-the-art machine vision algorithms can be used to retrieve common visual patterns shared by series of paintings. The search for such visual patterns, central to art history research, is challenging because of the diversity of similarity criteria that may relevantly demonstrate genealogical links. We design a methodology and a tool to efficiently annotate clusters of similar paintings and test various algorithms in a retrieval task. We show that pretrained convolutional neural networks perform better for this task than other machine vision methods aimed at photograph analysis. We also show that retrieval performance can be significantly improved by fine-tuning a network specifically for this task.
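The retrieval setup can be sketched with a pretrained network used as a descriptor extractor, assuming a recent torchvision; the fine-tuning stage, which the paper shows improves results, is omitted here.

import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T

resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
resnet.fc = torch.nn.Identity()   # keep the 2048-d descriptor
resnet.eval()

preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor(),
                        T.Normalize(mean=[0.485, 0.456, 0.406],
                                    std=[0.229, 0.224, 0.225])])

@torch.no_grad()
def descriptor(pil_image):
    return resnet(preprocess(pil_image).unsqueeze(0)).squeeze(0)

def ranked_links(query, collection):
    # collection: list of (painting_id, descriptor) pairs
    sims = [(pid, F.cosine_similarity(query, d, dim=0).item())
            for pid, d in collection]
    return sorted(sims, key=lambda t: -t[1])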
2016. VISART Workshop, ECCV, Amsterdam, September 2016. DOI : 10.1007/978-3-319-46604-0_52.
[98] Diachronic Evaluation of NER Systems on Old Newspapers
In recent years, many cultural institutions have engaged in large-scale newspaper digitization projects, and large amounts of historical texts are being acquired (via transcription or OCRization). Beyond document preservation, the next step consists in providing enhanced access to the content of these digital resources. In this regard, the processing of units which act as referential anchors, namely named entities (NE), is of particular importance. Yet the application of standard NE tools to historical texts faces several challenges, and performance is often not as good as on contemporary documents. This paper investigates the performance of different NE recognition tools applied to old newspapers by conducting a diachronic evaluation over 7 time series taken from the archives of the Swiss newspaper Le Temps.
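A diachronic evaluation of this kind reduces to computing entity-level scores per time slice. In the sketch below, tagger and the gold annotations are placeholders for the actual tools and data used in the paper.

def f1_per_slice(slices, tagger):
    # slices: dict period -> list of (tokens, gold_entity_set)
    scores = {}
    for period, docs in slices.items():
        tp = fp = fn = 0
        for tokens, gold in docs:
            pred = set(tagger(tokens))   # e.g. {("Genève", "LOC"), ...}
            tp += len(pred & gold)
            fp += len(pred - gold)
            fn += len(gold - pred)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores[period] = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return scores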
2016. 13th Conference on Natural Language Processing (KONVENS 2016), Bochum, Germany, September 19-21, 2016. p. 97-107.
[97] Wikipedia's Miracle
Wikipedia has become the principal gateway to knowledge on the web. The doubts raised during its first couple of years about information quality and the rigor of its collective negotiation process have proved unfounded. Whether this delights or horrifies us, Wikipedia has become part of our lives. Flexible in both its form and content, the online encyclopedia will continue to constitute one of the pillars of digital culture for decades to come. It is time to go beyond prejudices, to study its true nature and to better understand the emergence of this "miracle."
Lausanne: EPFL PRESS.
[96] Le miracle Wikipédia
Wikipedia has established itself as the main gateway to knowledge on the web. The debates of its early years concerning the quality of the information produced or the soundness of its collective negotiation process are now behind us. Whether we rejoice in it or deplore it, Wikipedia is now part of our lives. Flexible in both form and content, the online encyclopedia will no doubt remain one of the pillars of digital culture for decades to come. Beyond prejudices, it is now time to study its true nature and to understand, in hindsight, how such a "miracle" could come about.
Lausanne: Presses Polytechniques et Universitaires Romandes.
[95] La culture internet des mèmes
We are at a moment of transition in the history of media. On the Internet, millions of people produce, alter and relay "memes", digital contents with stereotyped motifs. This "culture" offers a new, rich and complex landscape to study. For the first time, a phenomenon that is at once global and local, popular and, in a certain way, elitist, constructed, mediated and structured by technology, can be observed with precision. Studying memes means not only understanding what digital culture is and may become, but also inventing a new approach for grasping the complexity of the worldwide circulation of motifs.
Lausanne: Presses Polytechniques et Universitaires Romandes.
[94] Visual Patterns Discovery in Large Databases of Paintings
The digitization of large databases of photographs of works of art opens new avenues for research in art history. For instance, collecting and analyzing painting representations beyond the relatively small number of commonly accessible works was previously extremely challenging. In the coming years, researchers are likely to have easier access to representations of paintings not only from museum archives but also from private collections, fine arts auction houses and art historians. However, access to large online databases is in itself not sufficient. There is a need for efficient search engines capable of searching painting representations not only on the basis of textual metadata but also directly through visual queries. In this paper we explore how convolutional neural network descriptors can be used in combination with algebraic queries to express powerful search queries in the context of art history research.
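An "algebraic" visual query can be sketched as vector arithmetic on normalised CNN descriptors, e.g. "similar to paintings A and B but unlike C". This is an illustration of the kind of query discussed, not the paper's implementation.

import numpy as np

def normalise(v):
    return v / np.linalg.norm(v)

def algebraic_query(plus, minus, collection):
    # plus/minus: lists of descriptors; collection: (id, descriptor) pairs
    q = sum(normalise(v) for v in plus) - sum(normalise(v) for v in minus)
    q = normalise(q)
    scored = [(pid, float(normalise(d) @ q)) for pid, d in collection]
    return sorted(scored, key=lambda t: -t[1])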
2016. Digital Humanities 2016, Kraków, Poland, July 11-16, 2016.
[93] Visualizing Complex Organizations with Data
The Affinity Map is a project founded by the ENAC whose aim is to provide an instrument for understanding organizations. The photograph shows the unveiling of the first map at the ENAC Research Day. The visualization was presented to the scholars who are themselves displayed in the representation.
IC Research Day, Lausanne, Switzerland, June 30, 2016.
[92] Navigating through 200 years of historical newspapers
This paper aims to describe and explain the processes behind the creation of a digital library composed of two Swiss newspapers, namely Gazette de Lausanne (1798-1998) and Journal de Genève (1826-1998), covering an almost two-century period. We developed a general purpose application giving access to this cultural heritage asset; a large variety of users (e.g. historians, journalists, linguists and the general public) can search through the content of around 4 million articles via an innovative interface. Moreover, users are offered different strategies to navigate through the collection: lexical and temporal lookup, n-gram viewer and named entities.
2016. International Conference on Digital Preservation (IPRES), Bern, Switzerland, October 3-6, 2016.
[91] Studying Linguistic Changes on 200 Years of Newspapers
Large databases of scanned newspapers open new avenues for studying linguistic evolution. By studying a two-billion-word corpus corresponding to 200 years of newspapers, we compare several methods in order to assess how fast language is changing. After critically evaluating an initial set of methods for assessing textual distance between subsets corresponding to consecutive years, we introduce the notion of a lexical kernel, the set of unique words that maintain themselves over long periods of time. Focusing on linguistic stability instead of linguistic change allows the construction of more robust measures for assessing long-term phenomena such as word resilience. By systematically comparing the results obtained on two subsets of the corpus corresponding to two independent newspapers, we argue that the results obtained are independent of the specificity of the chosen corpus and likely reflect more general linguistic phenomena.
2016. Digital Humanities 2016, Kraków, Poland, July 11-16, 2016.
[90] The References of References: Enriching Library Catalogs via Domain-Specific Reference Mining
The advent of large-scale citation services has greatly impacted the retrieval of scientific information for several domains of research. The Humanities have largely remained outside of this shift despite their increasing reliance on digital means for information seeking. Given that publications in the Humanities probably have a longer than average life-span, mainly due to the importance of monographs in the field, we propose to use domain-specific reference monographs to bootstrap the enrichment of library catalogs with citation data. We exemplify our approach using a corpus of reference monographs on the history of Venice and extracting the network of publications they refer to. Preliminary results show that on average only 7% of extracted references are made to publications already within such corpus, therefore suggesting that reference monographs are effective hubs for the retrieval of further resources within the domain.
2016. 3rd International Workshop on Bibliometric-enhanced Information Retrieval (BIR2016), Padua, Italy, March 20-23, 2016. p. 32-43.
2015
[89] S'affranchir des automatismes
Fabuleuses mutations, Cité des Sciences, December 8, 2015.
[88] The Venice Time Machine
The Venice Time Machine is an international scientific programme launched by EPFL and the University Ca' Foscari of Venice with the generous support of the Fondation Lombard Odier. It aims at building a multidimensional model of Venice and its evolution covering a period of more than 1000 years. The project's ambition is to build a large open-access database that can be used for research and education. Thanks to a partnership with the Archivio di Stato in Venice, kilometers of archives are currently being digitized, transcribed and indexed, laying the basis of the largest database ever created on Venetian documents. The State Archives of Venice contain a massive amount of handwritten documentation in languages evolving from medieval times to the 20th century. An estimated 80 km of shelves are filled with over a thousand years of administrative documents, from birth registrations, death certificates and tax statements all the way to maps and urban planning designs. These documents are often very delicate and occasionally in a fragile state of conservation. Complementing these primary sources, the contents of thousands of monographs have been indexed and made searchable.
2015. ACM Symposium on Document Engineering, Lausanne, Switzerland, September 8-11, 2015. DOI : 10.1145/2682571.2797071.
[87] Venice Time Machine : Recreating the density of the past
This article discusses the methodology used in the Venice Time Machine project (http://vtm.epfl.ch) to reconstruct a historical geographical information system covering the social and urban evolution of Venice over a period of 1,000 years. Given the time span considered, the project used a combination of sources and a specific approach to align heterogeneous historical evidence into a single geographic database. The project is based on a mass digitization project of one of the largest archives in Venice, the Archivio di Stato. One goal of the project is to build a kind of ‘Google map’ of the past, presenting a hypothetical reconstruction of Venice in 2D and 3D for any year starting from the origins of the city to present-day Venice.
2015. Digital Humanities 2015, Sydney, June 29 - July 3, 2015.
[86] On Mining Citations to Primary and Secondary Sources in Historiography
We present preliminary results from the Linked Books project, which aims at analysing citations from the historiography on Venice. A preliminary goal is to extract and parse citations to both primary and secondary sources from any location in the text, especially footnotes. We detail a pipeline for these tasks based on a set of classifiers and test it on the Archivio Veneto, a journal in the domain.
2015. Clic-IT 2015, Trento, Italy, December 3-4, 2015.
[85] Text Line Detection and Transcription Alignment: A Case Study on the Statuti del Doge Tiepolo
In this paper, we propose a fully automatic system for the transcription alignment of historical documents. We introduce the 'Statuti del Doge Tiepolo' data, which include images as well as transcriptions from the 14th century written in Gothic script. Our transcription alignment system is based on forced alignment techniques and character Hidden Markov Models and is able to efficiently align complete document pages.
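At its core, forced alignment finds the best monotonic mapping from image frames to the characters of a known transcription. The sketch below collapses each character to a single state and runs a Viterbi pass over per-frame log-probabilities; real systems use multi-state character HMMs, so this is only a simplified illustration.

import numpy as np

def forced_align(log_probs):
    # log_probs: (n_frames, n_chars) log P(frame | transcription character),
    # with characters in transcription order; requires n_frames >= n_chars
    T, S = log_probs.shape
    score = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    score[0, 0] = log_probs[0, 0]
    for t in range(1, T):
        for s in range(S):
            stay = score[t - 1, s]
            move = score[t - 1, s - 1] if s > 0 else -np.inf
            back[t, s] = 0 if stay >= move else 1
            score[t, s] = max(stay, move) + log_probs[t, s]
    # backtrace from the final character at the final frame
    path, s = [], S - 1
    for t in range(T - 1, -1, -1):
        path.append(s)
        s -= back[t, s]
    return path[::-1]   # frame index -> character index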
2015. Digital Humanities 2015, Sydney, Australia, June 29 - July 3, 2015.
[84] Anatomy of a Drop-Off Reading Curve
Not all readers finish the books they start to read. Electronic media allow us to measure more precisely how this "drop-off" effect unfolds as readers are reading a book. A curve showing how many people have read each chapter of a book is likely to decrease progressively as some readers interrupt their reading "journey". This article is an initial study of the shape of these "drop-off" reading curves.
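Computing such a curve is straightforward once per-reader traces are available; in this illustrative sketch each trace is reduced to the last chapter a reader reached.

def drop_off_curve(traces, n_chapters):
    # traces: list of the last chapter index reached by each reader
    total = len(traces)
    return [sum(last >= c for last in traces) / total
            for c in range(n_chapters)]

curve = drop_off_curve([2, 5, 9, 9, 1, 7], n_chapters=10)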
2015. DH2015, Sydney, Australia, June 29 - July 3, 2015.
[83] Inversed N-gram viewer: Searching the space of word temporal profiles
2015. Digital Humanities 2015, Sydney, Australia, June 29 - July 3, 2015.
[82] Quelques réflexions préliminaires sur la Venice Time Machine
Even today, most historians are used to working in very small teams, focusing on very specific questions. They only rarely share their notes or their data, perceiving, rightly or wrongly, that their preparatory research underlies the originality of their future work. Becoming aware of the size and informational density of archives like that of Venice should make us realize that it is impossible for a few historians working in an uncoordinated way to cover such a vast object with any systematicity. If we want to transform an archive of 80 kilometers covering a thousand years of history into a structured information system, we must develop a collaborative, coordinated and massive scientific programme. We are facing an informational entity that is too large; only an international scientific collaboration can attempt to come to grips with it.
L'archive dans quinze ans; Louvain-la-Neuve: Academia, 2015.
p. 161-179.
[81] A Map for Big Data Research in Digital Humanities
This article is an attempt to represent Big Data research in digital humanities as a structured research field. A division in three concentric areas of study is presented. Challenges in the first circle – focusing on the processing and interpretations of large cultural datasets – can be organized linearly following the data processing pipeline. Challenges in the second circle – concerning digital culture at large – can be structured around the different relations linking massive datasets, large communities, collective discourses, global actors, and the software medium. Challenges in the third circle – dealing with the experience of big data – can be described within a continuous space of possible interfaces organized around three poles: immersion, abstraction, and language. By identifying research challenges in all these domains, the article illustrates how this initial cartography could be helpful to organize the exploration of the various dimensions of Big Data Digital Humanities research.
Frontiers in Digital Humanities
2015
DOI : 10.3389/fdigh.2015.00001
[80] Mapping the Early Modern News Flow: An Enquiry by Robust Text Reuse Detection
Early modern printed gazettes relied on a system of news exchange and text reuse largely based on handwritten sources. The reconstruction of this information exchange system is possible by detecting reused texts. We present a method to identify text borrowings within noisy OCRed texts from printed gazettes, based on string kernels and local text alignment. We apply our method to a corpus of Italian gazettes for the year 1648. Besides unveiling substantial overlaps in news sources, we are able to assess the editorial policies of different gazettes and account for a multi-faceted system of text reuse.
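The local-alignment step can be sketched with a plain Smith-Waterman pass over two noisy character sequences, where a high peak score signals a shared news item. Scoring constants are illustrative; the paper combines this with string kernels for candidate selection.

def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            best = max(best, H[i][j])
    return best   # peak local-alignment score between the two texts

score = smith_waterman("venetia li 10 giugno", "uenetia li 10 giugnio")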
2015. HistoInformatics 2014. p. 244-253. DOI : 10.1007/978-3-319-15168-7_31.
[79] X-ray spectrometry and imaging for ancient administrative handwritten documents
'Venice Time Machine' is an international programme whose objective is transforming the 'Archivio di Stato' – 80 km of archival records documenting every aspect of 1000 years of Venetian history – into an open-access digital information bank. Our study is part of this project: we are exploring new, faster and safer ways to digitize manuscripts, without opening them, using X-ray tomography. A fundamental issue is the chemistry of the inks used for administrative documents: contrary to pieces of high artistic or historical value, for such items the composition is scarcely documented. We used X-ray fluorescence to investigate the inks of four ordinary Italian handwritten documents from the 15th to the 17th century. The results were correlated with X-ray images acquired with different techniques. In most cases, the iron detected in the 'iron gall' inks produces image absorption contrast suitable for tomographic reconstruction, allowing computer extraction of handwriting information from sets of projections. When absorption is too low, differential phase contrast imaging can reveal the characters from the substrate morphology.
X-Ray Spectrometry
2015
DOI : 10.1002/xrs.2581
[78] Ancient administrative handwritten documents: X-ray analysis and imaging
Handwritten characters in administrative antique documents from three centuries have been detected using different synchrotron X-ray imaging techniques. Heavy elements in ancient inks, present even in everyday administrative manuscripts as shown by X-ray fluorescence spectra, produce attenuation contrast. In most cases the image quality is good enough for tomographic reconstruction in view of future applications to virtual page-by-page 'reading'. When attenuation is too low, differential phase contrast imaging can reveal the characters from refractive index effects. The results are potentially important for new information harvesting strategies, for example from the huge Archivio di Stato collection, the objective of the Venice Time Machine project.
Journal of Synchrotron Radiation
2015
DOI : 10.1107/S1600577515000314
[77] Il pleut des chats et des chiens: Google et l'impérialisme linguistique
At the beginning of last December, anyone asking Google Translate for the Italian equivalent of the sentence « Cette fille est jolie » ("This girl is pretty") obtained a strange proposal: Questa ragazza è abbastanza, literally "This girl is quite". Beauty had been lost in translation. How can one of the best-performing machine translators in the world, backed by a unique linguistic capital of billions of sentences, make such a crude error? The answer is simple: it goes through English. « Jolie » can be translated as pretty, which means both "pretty" and "quite". The second meaning corresponds to the Italian abbastanza.
2015.
2014
[76] L'historien et l'algorithme
The stormy relationship between history and computer science is not new, and the revolution in the historical sciences announced for several decades is still awaited. In this chapter, we nevertheless try to show that an unprecedented evolution is now at work in the historical sciences, and that this transformation differs from the one that characterized the arrival of 'cliometrics' and quantitative methods a few decades ago. Our hypothesis is that, through the effects of two complementary processes, we are witnessing a generalization of algorithms as mediating objects of historical knowledge.
Le Temps des Humanités Digitales; FYP Editions, 2014.
p. 49-63.
[75] X-ray Spectrometry and imaging for ancient handwritten document
We detected handwritten characters in ancient documents from several centuries with different synchrotron X-ray imaging techniques. The results were correlated with those of X-ray fluorescence analysis. In most cases, heavy elements produced image quality suitable for tomographic reconstruction, leading to virtual page-by-page "reading". When absorption is too low, differential phase contrast (DPC) imaging can reveal the characters from the substrate morphology. This paves the way to new strategies for information harvesting during mass digitization programs. This study is part of the Venice Time Machine project, an international research programme aiming to transform the immense Venetian archival records into an open-access digital information system. The Archivio di Stato in Venice holds about 80 km of archival records documenting every aspect of 1000 years of Venetian history. A large part of these records take the form of ancient bound registers that can only be digitized through cautious manual operations: each page must be turned by hand in order to be photographed. Our project explores new ways to virtually "read" manuscripts without opening them. We specifically plan to use X-ray tomography to computer-extract page-by-page information from sets of projection images. The raw data can be obtained without opening or manipulating the manuscripts, reducing the risk of damage and speeding up the process. The present tests demonstrate that the approach is feasible. Furthermore, they show that over a very long period of time the common recipes used in Europe for inks in "ordinary" handwriting – ship records, notary papers, commercial transactions, demographic accounts, etc. – very often produced a high concentration of heavy or medium-heavy elements such as Fe, Hg and Ca. This opens the way in general to X-ray analysis and imaging. It could also lead to a better understanding of deterioration mechanisms in the search for remedies. The most important of the results we present is tomographic reconstruction: we simulated books with stacks of manuscript fragments and obtained individual views from sets of projection images, corresponding indeed to a virtual page-by-page "reading" without opening the volume.
2014. European Conference on X-Ray Spectrometry, EXRS2014, Bologna.
[74] Virtual X-ray Reading (VXR) of Ancient Administrative Handwritten Documents
The study of ancient documents is too often confined to specimens of high artistic value or to official writings. Yet a wealth of information is stored in administrative records such as ship records, notary papers, work contracts, tax declarations, commercial transactions and demographic accounts. One of the best examples is the Venice Time Machine project, which targets a massive digitization and information extraction programme for the Venetian archives. The Archivio di Stato in Venice holds about 80 km of archival documents spanning ten centuries and documenting every aspect of the Venetian Mediterranean Empire. If unlocked and transformed into a digital information system, this information could significantly change our understanding of European history. We are exploring new ways to facilitate and speed up this broad task, exploiting X-ray techniques, notably those based on synchrotron light. Specifically, we plan to use X-ray tomography to computer-extract page-by-page information from sets of projection images. The raw data can be obtained without opening or manipulating the bound administrative registers, reducing the risk of damage and accelerating the process. We present here positive tests of this approach. First, we systematically analyzed the ink composition of a sample of Italian handwritings spanning several centuries. Then, we performed X-ray imaging with different contrast mechanisms (absorption, scattering and refraction) using the differential phase contrast (DPC) mode of the TOMCAT beamline of the Swiss Light Source (SLS). Finally, we selected cases of high contrast to perform tomographic reconstruction and demonstrate page-by-page handwriting recognition. The experiments concerned both black inks from different centuries and red ink from the 15th century. For the majority of the specimens, we found in the ink areas heavy or medium-heavy elements such as Fe, Ca, Hg, Cu and Zn. This eliminates a major question about our approach, since documentation on the nature of inks for ancient administrative records is quite scarce. As a byproduct, the approach can produce valuable information on the ink-substrate interaction, with the objective of understanding and preventing corrosion and deterioration.
2014. Synchrotron Radiation in Art and Archaeology, SR2A 14.
[73] La simulation humaine : le roman-fleuve comme terrain d'expérimentation narrative
In this article we present the approach and first results of participatory research conducted jointly by the digital humanities laboratory of EPFL (DHLAB) and the Swiss writer Daniel de Roulet. In this study, we explore the ways in which digital reading may influence how complex narratives of the roman-fleuve or saga type are written and reorganized. We also present our first conclusions as well as possible future work in this vast and still little-studied domain.
Cahiers de Narratologie
2014
DOI : 10.4000/narratologie.7042
[72] Character Networks and Centrality
A character network represents relations between characters from a text; the relations are based on text proximity, shared scenes/events, quoted speech, etc. Our project sketches a theoretical framework for character network analysis, bringing together narratology, both close and distant reading approaches, and social network analysis. It is in line with recent attempts to automate the extraction of literary social networks (Elson, 2012; Sack, 2013) and other studies stressing the importance of character-systems (Woloch, 2003; Moretti, 2011). The method we use to build the network is direct and simple. First, we extract co-occurrences from a book index, without the need for text analysis. We then describe the narrative roles of the characters, which we deduce from their respective positions in the network, i.e. the discourse. As a case study, we use the autobiographical novel Les Confessions by Jean-Jacques Rousseau. We start by identifying co-occurrences of characters in the book index of our edition (Slatkine, 2012). Subsequently, we compute four types of centrality: degree, closeness, betweenness, eigenvector. We then use these measures to propose a typology of narrative roles for the characters. We show that the two parts of Les Confessions, written years apart, are structured around mirroring central figures bearing similar centrality scores. The first part revolves around Rousseau's mentor, a figure of openness. The second part centres on a group of schemers, depicting a period of deep paranoia. We also highlight characters with intermediary roles: they provide narrative links between the societies in the life of the author. The method we detail in this complete case study of character network analysis can be applied to any work documented by an index.
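The pipeline can be sketched with networkx: index co-occurrences become weighted edges, and the four centralities are read off the resulting graph. The index data below is a toy stand-in, not the Slatkine edition's index.

import networkx as nx
from itertools import combinations

index = {"page 12": ["Rousseau", "Mme de Warens"],
         "page 48": ["Rousseau", "Grimm", "Diderot"],
         "page 51": ["Grimm", "Diderot"]}

g = nx.Graph()
for _, names in index.items():
    for a, b in combinations(names, 2):   # co-occurrence on the same page
        w = g.get_edge_data(a, b, {"weight": 0})["weight"]
        g.add_edge(a, b, weight=w + 1)

centralities = {
    "degree": nx.degree_centrality(g),
    "closeness": nx.closeness_centrality(g),
    "betweenness": nx.betweenness_centrality(g),
    "eigenvector": nx.eigenvector_centrality(g, max_iter=1000),
}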
University of Lausanne, 2014.
[71] Encoding metaknowledge for historical databases
Historical knowledge is fundamentally uncertain. A given account of a historical event is typically based on a series of sources and on sequences of interpretation and reasoning based on these sources. Generally, the product of this historical research takes the form of a synthesis, like a narrative or a map, but does not give a precise account of the intellectual process that led to this result. Our project consists of developing a methodology, based on semantic web technologies, to encode historical knowledge while documenting in detail the intellectual sequences linking the historical sources with a given encoding, also known as paradata. More generally, the aim of this methodology is to build systems capable of representing multiple historical realities, as they are used to document the underlying processes in the construction of possible knowledge spaces.
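With semantic web tooling such as rdflib (version 6+ assumed), attaching paradata to a historical assertion can be sketched as follows; the ex: vocabulary and identifiers are placeholders, not the project's actual ontology.

from rdflib import Graph, Namespace, Literal

EX = Namespace("http://example.org/vtm/")
g = Graph()

claim = EX["claim/1"]
g.add((claim, EX.subject, EX["person/Contarini"]))
g.add((claim, EX.predicate, EX.residesAt))
g.add((claim, EX.object, EX["parcel/1740-123"]))
# paradata: which source and which interpretive step support the claim
g.add((claim, EX.derivedFrom, EX["source/catastici-1740-f12r"]))
g.add((claim, EX.method, Literal("manual transcription + toponym lookup")))

print(g.serialize(format="turtle"))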
2014. Digital Humanities 2014, Lausanne, Switzerland, July 7-12, 2014. p. 288-289.
[70] La question de la langue à l'époque de Google
In 2012, Google generated a turnover of 50 billion dollars, an impressive financial result for a company created only about fifteen years earlier. 50 billion dollars is 140 million dollars a day, 5 million dollars an hour. If you read this chapter in about ten minutes, Google will meanwhile have earned almost a million dollars in revenue. What does Google sell to achieve such impressive financial performance? Google sells words, millions of words.
Digital Studies Organologie des savoirs et technologies de la connaissance; Limoges: FYP, 2014.
p. 143-156.
[69] Fantasmagories au musée
The increasingly pervasive use of new technologies in museums and libraries (touch tablets, audio guides, interactive screens, etc.) is said to divide audiences between those seeking understanding and those for whom emotion comes first. How, then, can a shared collective experience be reconciled with technical devices? How can virtual labels floating in the air become "didactic phantasmagorias"? A museographic account of a mixed-reality experiment with "holographic" virtual display cases.
Alliage
2014
[68] A Preparatory Analysis of Peer-Grading for a Digital Humanities MOOC
Over the last two years, Massive Open Online Courses (MOOCs) have been unexpectedly successful in convincing large numbers of students to pursue online courses in a variety of domains. Contrary to the "learn anytime anywhere" motto, this new generation of courses is based on regular assignments that must be completed and corrected on a fixed schedule. Successful courses attracted about 50,000 students in the first week but typically stabilized around 10,000 in the following weeks, as most courses demand significant involvement. With 10,000 students, grading is obviously an issue, and the first successful courses tended to be technical, typically in computer science, where various options for automatic grading could be envisioned. However, this posed a challenge for humanities courses. The solution investigated for dealing with this issue is peer-grading: having students grade one another's work. The intuition that this would work was based on older results showing high correlation between professor grading, peer-grading and self-grading. The generality of this correlation can reasonably be questioned: there is a high chance that peer-grading works for certain domains or certain assignments but not for others. Ideally this should be tested experimentally before launching any large-scale course. EPFL is one of the first European schools to experiment with MOOCs in various domains. Since the launch of these first courses, preparing an introductory MOOC on Digital Humanities has been one of our top priorities. However, we felt it was important to first validate the kind of peer-grading strategy we were planning to implement on a smaller set of students, to determine whether it would actually work for the assignments we envisioned. This motivated the present study, conducted during the first semester of our master's-level introductory course on Digital Humanities at EPFL.
2014. Digital Humanities 2014, Lausanne, July 7-12, 2014. p. 227-229.
[67] Linguistic Capitalism and Algorithmic Mediation
Google’s highly successful business model is based on selling words that appear in search queries. Organizing several million auctions per minute, the company has created the first global linguistic market and demonstrated that linguistic capitalism is a lucrative business domain, one in which billions of dollars can be realized per year. Google’s services need to be interpreted from this perspective. This article argues that linguistic capitalism implies not an economy of attention but an economy of expression. As several million users worldwide daily express themselves through one of Google’s interfaces, the texts they produce are systematically mediated by algorithms. In this new context, natural languages could progressively evolve to seamlessly integrate the linguistic biases of algorithms and the economical constraints of the global linguistic economy.
Representations
2014
[66] Analyse des réseaux de personnages dans Les Confessions de Jean-Jacques Rousseau
This article studies the concept of centrality in the networks of characters appearing in Jean-Jacques Rousseau's Les Confessions. Our objective is to characterize certain aspects of the narrative roles of the characters on the basis of their co-occurrences in the text. We sketch a theoretical framework for literary network analysis, bringing together narratology, distant reading and social network analysis. We extract co-occurrences from a book index without the need for text analysis and describe the narrative roles of the characters. As a case study, we use the autobiographical novel Les Confessions by Jean-Jacques Rousseau. Finally, we compute four types of centrality (degree, closeness, betweenness, eigenvector) and use these measures to propose a typology of narrative roles for the characters.
Les Cahiers du Numérique
2014
DOI : 10.3166/LCN.10.3.109-133
[65] A Network Analysis Approach of the Venetian Incanto System
The objective of this paper is to perform new analyses of the structure and evolution of the Incanto system. The hypothesis is that network analysis makes it possible to go beyond textual narrative or even cartographic representation, potentially offering a new perspective for understanding this maritime system.
2014. Digital Humanities 2014, Lausanne, July 7-12, 2014.
[64] Character networks in Les Confessions from Jean-Jacques Rousseau
2014. Texas Digital Humanities Conference, Houston, Texas, USA, April 10-12, 2014.
[63] Semi-Automatic Transcription Tool for Ancient Manuscripts
In this work, we investigate various techniques from the fields of shape analysis and image processing in order to construct a semi-automatic transcription tool for ancient manuscripts. First, we design a shape matching procedure using shape contexts, introduced in [1], and exploit this procedure to compute different distances between two arbitrary shapes/words. Then, we use Fisher discrimination to combine these distances into a single similarity measure and use it to naturally represent the words on a similarity graph. Finally, we investigate an unsupervised clustering analysis on this graph to create groups of semantically similar words, and propose an uncertainty measure associated with the attribution of a word to a group. The clusters together with the uncertainty measure form the core of the semi-automatic transcription tool, which we test on a dataset of 42 words. The average classification accuracy achieved with this technique on this dataset is 86%, which is quite satisfactory. The tool reduces the number of words that actually need to be typed to transcribe a document by 70%.
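A simplified sketch of the clustering stage: several shape distances per word pair are combined into one measure, a similarity graph is thresholded, and candidate transcription groups are read off as connected components. Fixed illustrative weights stand in for the Fisher-derived combination.

import networkx as nx

def combined_distance(dists, weights=(0.5, 0.3, 0.2)):
    return sum(w * d for w, d in zip(weights, dists))

def word_clusters(pairs, threshold=0.35):
    # pairs: dict (word_id_a, word_id_b) -> tuple of shape distances
    g = nx.Graph()
    for (a, b), dists in pairs.items():
        if combined_distance(dists) < threshold:
            g.add_edge(a, b)
    return list(nx.connected_components(g))  # candidate transcription groups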
IC Research Day 2014: Challenges in Big Data, SwissTech Convention Center, Lausanne, Switzerland, June 12, 2014.
[62] Attentional Processes in Natural Reading: the Effect of Margin Annotations on Reading Behaviour and Comprehension
We present an eye tracking study investigating how natural reading behavior and reading comprehension are influenced by in-context annotations. In a lab experiment, three groups of participants were asked to read a text and answer comprehension questions: a control group taking no annotations, a second group reading and taking annotations, and a third group reading a peer-annotated version of the same text. A custom head-mounted eye tracking system was designed specifically for this experiment in order to study how learners read and quickly re-read annotated paper texts in low-constrained experimental conditions. In the analysis, we measured the phenomenon of annotation-induced overt attention shifts in reading and found that: (1) the reader's attention shifts toward a margin annotation more often when the annotation lies in the early peripheral vision, and (2) the number of attention shifts between two different types of information units is positively related to comprehension performance in quick re-reading. These results can be translated into potential criteria for knowledge assessment systems.
2014. ACM Symposium on Eye Tracking Research and Applications, Safety Harbor, USA, March 26-28, 2014. p. 235-238. DOI : 10.1145/2578153.2578195.
[61] 3D Model-Based Gaze Estimation in Natural Reading: a Systematic Error Correction Procedure based on Annotated Texts
Studying natural reading and its underlying attention processes requires devices that are able to provide precise measurements of gaze without rendering the reading activity unnatural. In this paper we propose an eye tracking system that can be used to conduct analyses of reading behavior in low constrained experimental settings. The system is designed for dual-camera-based head-mounted eye trackers and allows free head movements and note taking. The system is composed of three different modules. First, a 3D model-based gaze estimation method computes the reader’s gaze trajectory. Second, a document image retrieval algorithm is used to recognize document pages and extract annotations. Third, a systematic error correction procedure is used to post-calibrate the system parameters and compensate for spatial drifts. The validation results show that the proposed method is capable of extracting reliable gaze data when reading in low constrained experimental conditions.
2014. ACM Symposium on Eye Tracking Research and Applications, Safety Harbor, USA, March 26-28, 2014. p. 87-90. DOI : 10.1145/2578153.2578164.
2013
[60] How to build an information time machine
The Venice Time Machine project aims at building a multidimensional model of Venice and its evolution covering a period of more than 1000 years. Kilometers of archives are currently being digitized, transcribed and indexed, laying the basis of the largest database ever created on Venetian documents. Millions of photos are processed using machine vision algorithms and stored in a format adapted to high-performance computing approaches. In addition to these primary sources, the content of thousands of monographs is indexed and made searchable. The information extracted from these diverse sources is organized in a semantic graph of linked data and unfolded in space and time as part of a historical geographical information system, based on high-resolution scanning of the city itself.
TEDxCaFoscariU, Venice, Italy, June 2013.
[59] A social network analysis of Rousseau’s autobiography “Les Confessions”
We propose an analysis of the social network composed of the characters appearing in Jean-Jacques Rousseau's autobiography Les Confessions, with edges based on co-occurrences. This work consists of twelve volumes that span over fifty years of his life. Having a single author allows us to consider the book as a coherent work, unlike some of the historical texts from which networks often get extracted, and to compare the evolution of character patterns throughout the books on a common basis. Les Confessions, considered one of the first modern autobiographies, has the originality of letting us compose a social network close to reality, with only a bias introduced by the author, which has to be taken into account during the analysis. Hence, with this paper, we discuss the interpretation of networks based on the content of a book as social networks. We also, in a digital humanities approach, discuss the relevance of this object as a historical source and a narrative tool.
2013. Digital Humanities 2013, Lincoln, Nebraska, USA, July 15-19, 2013.
[58] Analyse de réseaux sur les Confessions de Rousseau
2013. Humanités délivrées, Lausanne, Switzerland, October 1-2, 2013.
[57] Les "Big data" du passé
The human sciences are about to undergo an upheaval comparable to the one that struck biology in the last thirty years. This revolution essentially consists of a change of scale in the ambition and size of research projects. We must train a new generation of young researchers prepared for this transformation.
Bulletin SAGW
2013
[56] Expanding Eye-Tracking Methods to Explain the Socio-Cognitive Effects of Shared Annotations
Social technologies are leading to transformations in education by empowering the way learners connect to each other, introducing new means of teaching and learning, and reshaping the way knowledge is delivered. Annotating texts is a learning strategy spontaneously adopted by learners during lectures or while reading textbooks. Students still take annotations individually on paper, and these tend to be thrown away once the learning goal is achieved. Students also engage occasionally in the spontaneous practice of note sharing. Our work explores experimentally the benefits of note-taking and note-sharing behaviour. First, we study how sharing student-annotated instructional texts can improve learning, enriching an experimental approach with a new eye tracking method. Second, we conduct experiments on computer-mediated note sharing in the classroom. Our results demonstrate the virtuous circle of note-taking: both annotating while reading and reading annotated documents lead to better learning achievement. In the first experimental study, we measure whether the presence of annotations on an instructional text can influence the reading pattern, and how visual features of annotations can elicit the reader's attention. To complement the results concerning learning and reading comprehension, we look into the readers' gaze patterns to explain how the students' learning outcome relates to reading annotated texts. For this purpose, we design a novel eye tracker that can be used to study reading and note-taking in unconstrained experimental settings. This eye tracking system is based on a systematic error correction procedure that exploits the appearance similarity between the annotated texts and the spatial distribution of the fixation points. We show that this method can extract accurate gaze measures without introducing experimental constraints that could disturb the note-taking process and affect the readers' comprehension. In the second study, we move from a controlled experimental setting to the classroom. We discuss how the use of technology can facilitate a spontaneous transition from personal to shared annotations and support students in the learning process. We complement the analysis by reporting a friendship bias in browsing the shared annotated material. We further speculate on the potential of shared annotations in triggering adaptations of the instructional material and teaching workflow. These two studies provided an insightful understanding of the effects of student-generated annotations on reading comprehension and the underlying impact on a community of learners. The obtained findings should inspire further experimentation on social learning environments meant to facilitate knowledge sharing through shared annotations and their diffusion within educational institutions.
Lausanne, EPFL, 2013. DOI : 10.5075/epfl-thesis-5917.
[55] The practical confrontation of engineers with a new design endeavour: The case of digital humanities
This chapter shows some of the practices engineers use when they are confronted with completely new situations, when they enter an emerging field where methods and paradigms are not yet stabilized. Following the engineers here helps shed light on their practices when they are confronted with new fields and new interlocutors. This is the case for engineers and computer scientists who engage with the human and social sciences to imagine, design, develop and implement digital humanities (DH) with specific hardware, software and infrastructure.
Engineering Practice in a Global Context; London, UK: CRC Press, 2013.
p. 61-78.
[54] Le cercle vertueux de l'annotation
Annotating is good for the reader's immediate comprehension, and reading annotated texts makes them easier to understand. This double relevance of annotation, confirmed experimentally, may explain its centuries-old success.
Le lecteur à l'oeuvre; Gollion, Suisse: Infolio, 2013.
p. 57-68.
[53] Dyadic pulsations as a signature of sustainability in correspondence networks
2013. Digital Humanities 2013, Lincoln, Nebraska, USA, July 15-19, 2013.
[52] Are Google’s linguistic prosthesis biased towards commercially more interesting expressions? A preliminary study on the linguistic effects of autocompletion algorithms.
Google's linguistic prostheses have become common mediators between our intended queries and their actual expressions. By correcting a mistyped word or extending a small string of letters into a statistically plausible continuation, Google offers a valuable service to users. However, Google might also be transforming a keyword with little or no value into a keyword for which bids are more likely. Since Google's word bidding algorithm accounts for most of the company's revenues, it is reasonable to ask whether these linguistic prostheses are biased towards commercially more interesting expressions. This study describes a method for making progress on this question. Based on an optimal experiment design algorithm, we reconstruct a model of Google's autocompletion and value assignment functions. We can then explore and question the various possible correlations between the two functions. This is a first step towards the larger goal of understanding how Google's linguistic economy impacts natural language.
2013. Digital Humanities 2013, Lincoln, Nebraska, USA, July 15-19, 2013. p. 245-248.
[51] Living With a Vacuum Cleaning Robot - A 6-month Ethnographic Study
Little is known about the usage, adoption process and long-term effects of domestic service robots in people's homes. We investigated the usage, acceptance and process of adoption of a vacuum cleaning robot in nine households by means of a six-month ethnographic study. Our major goals were to explore how the robot was used and integrated into daily practices, whether it was adopted in a durable way, and how it impacted its environment. We studied people's perception of the robot and how it evolved over time, kept track of daily routines, the usage patterns of cleaning tools, and social activities related to the robot. We integrated our results in an existing framework for domestic robot adoption and outlined similarities and differences to it. Finally, we identified several factors that promote or hinder the process of adopting a domestic service robot and make suggestions to further improve human-robot interactions and the design of functional home robots toward long-term acceptance.
International Journal of Social Robotics
2013
DOI: 10.1007/s12369-013-0190-2
2012
[50] Interactive device and method for transmitting commands from a user
According to the present invention, an interactive device is provided comprising a display, a camera and an image-analyzing means. The interactive device comprises means to acquire an image with the camera, the analyzing means detecting at least a human face in the acquired image and displaying on the display at least a pattern where the human face was detected. The interactive device further comprises means to determine a halo region extending at least around the pattern, means to add into the halo region at least one interactive zone related to a command, means to detect movement on the interactive zone, and means to execute the command by said device.
2012.
US2009208052
EP2090961
[49] L'ordinateur du XXIe siècle sera un robot
Et l'Homme créa le robot; Paris: Musée des Arts et Métiers / Somogy éditions d'Art, 2012.
[48] La bibliothèque comme interface physique de découverte et lieu de curation collective
A library is always a volume organized into two sub-spaces: a public part (front end) with which users can interact, and a hidden part (back end) used for logistics and storage. At the Bibliothèque Nationale de France, a robotic system links the immense underground spaces open to the public with the four towers that store the books. The architect Dominique Perrault imagined a vertiginous library-machine in which the circulation of people was designed symmetrically to the circulation of books ...
Documentaliste - Sciences de l'information
2012
[47] Can a Table Regulate Participation in Top Level Managers' Meetings?
We present a longitudinal study of the participation regulation effects of a speech-aware interactive table. This study focuses on training meetings of groups of top-level managers, whose composition does not change, in a corporate organization. We show that an effect of balancing participation develops over time. We also report other emerging group-specific features such as interaction patterns and signatures, leadership effects, and behavioral changes between meetings. Finally, we collect feedback from the participants and qualitatively analyze the human and social aspects of the participants' interaction mediated by the technology.
2012. International Conference on Supporting Group Work GROUP'12, Sanibel, Florida, USA, October 27-31, 2012.
[46] Supporting opportunistic search in meetings with tangible tabletop
Web searches are often needed in collocated meetings. Many research projects have supported collaborative search in information-seeking meetings, where searches are executed both intentionally and intensively. In most common meetings, however, Web searches happen randomly and with low intensity. They serve neither as main tasks nor as major activities. This kind of search can be referred to as opportunistic search. Opportunistic search in meetings has not yet been studied; our research is based upon this motivation. We propose an augmented tangible tabletop system with a semi-ambient conversation-context-aware surface as well as foldable paper browsers for supporting opportunistic search in collocated meetings. In this paper, we present our design of the system and initial findings.
2012. The 2012 ACM Annual Conference Extended Abstracts, Austin, Texas, USA, May 5-10, 2012. DOI: 10.1145/2212776.2223837.
[45] How books will become machines
This article is an attempt to reframe the evolution of books within a larger evolutionary theory. A central concept of this theory is the notion of regulated representation. A regulated representation is governed by a set of production and usage rules. Our core hypothesis is that regulated representations get more regular over time. The general process of this regulating tendency is the transformation of a convention into a mechanism. The regulation usually proceeds in two consecutive steps, first mechanizing the representation's production rules and then its conventional usages. Ultimately, through this process, regulated representations tend to become machines.
Lire demain : Des manuscrits antiques à l'ère digitale; Lausanne: PPUR, 2012.
p. 27-44.
[44] Hands-on Symmetry with Augmented Reality on Paper
Computers have long been trying to make their way into education, because they allow learners to manipulate abstract notions and explore problem spaces easily. Yet even with this tremendous potential, their integration into formal learning has had limited success, perhaps because computer interfaces break completely with existing tools and curricula. We propose paper interfaces as a solution. Paper interfaces can be manipulated and annotated yet still offer the processing power and dynamic displays of computers. We focus on geometry, which allows us to fully harness these two interaction modalities: for example, cutting a complex paper shape into simpler forms shows how to compute an area. We use a camera-projector system to project information on pieces of paper detected through a 2D barcode. We developed and experimented with several activities based on this system for geometry learning; here we focus on one activity addressing symmetry. This activity is based on a sheet where part of the content is scanned and then reprojected according to one or more symmetry axes. Such a sheet illustrates, in real time, how a symmetric drawing is constructed. Anything in the input area can be reflected: ink, paper shapes, or physical objects. Reporting on a collaboration with three teachers, we show how the augmented sheets provide an easy way for teachers to develop their own augmented reality activities; the teachers successfully used the activities in their classes, integrating them into the normal course of their teaching. We also relate how paper interfaces let pupils express their creativity while working on geometry.
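The published system tracks each sheet through its own 2D barcodes; as a rough stand-in, the same tracking step can be sketched with OpenCV's ArUco markers (our substitution, assuming OpenCV 4.7 or later):

```python
import cv2

# ArUco markers replace the system's own 2D barcodes in this sketch.
detector = cv2.aruco.ArucoDetector(
    cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50))

def sheet_corners(frame):
    """Return {marker_id: 4x2 corner array} for each tagged paper
    element visible to the camera; the projector can then draw the
    reflected content inside those quadrilaterals."""
    corners, ids, _rejected = detector.detectMarkers(frame)
    if ids is None:
        return {}
    return {int(i): c.reshape(4, 2) for i, c in zip(ids.flatten(), corners)}
```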
2012. 9th International Conference on Hands-on Science, Antalya, Turkey, October 16-21, 2012.
[43] Paper Interfaces to Support Pupils and Teachers in Geometry
Digital Ecosystems for Collaborative Learning: Embedding Personal and Collaborative Devices to Support Classrooms of the Future (DECL). Workshop in the International Conference of the Learning Sciences (ICLS), Sydney, Australia, July 2.
[42] Tangible Paper Interfaces: Interpreting Pupils' Manipulations
Paper interfaces merge the advantages of the digital and physical world. They can be created using normal paper augmented by a camera+projector system. They are particularly promising for applications in education, because paper is already fully integrated in the classroom, and computers can augment them with a dynamic display. However, people mostly use paper as a document, and rarely for its characteristics as a physical body. In this article, we show how the tangible nature of paper can be used to extract information about the learning activity. We present an augmented reality activity for pupils in primary schools to explore the classification of quadrilaterals based on sheets, cards, and cardboard shapes. We present a preliminary study and an in-situ, controlled study, making use of this activity. From the detected positions of the various interface elements, we show how to extract indicators about problem solving, hesitation, difficulty levels of the exercises, and the division of labor among the groups of pupils. Finally, we discuss how such indicators can be used, and how other interfaces can be designed to extract different indicators.
2012. Interactive Tabletops and Surfaces 2012 Conference, Cambridge, Massachusetts, USA, November 11-14, 2012.
[41] Paper Interfaces for Learning Geometry
Paper interfaces offer tremendous possibilities for geometry education in primary schools. Existing computer interfaces designed for learning geometry do not consider the integration of conventional school tools, which form part of the curriculum. Moreover, most computer tools are designed specifically for individual learning; some propose group activities, but most disregard classroom-level learning, which impedes their adoption. We present an augmented-reality-based tabletop system with interface elements made of paper that addresses these issues. It integrates conventional geometry tools seamlessly into the activity and enables group and classroom-level learning. To evaluate our system, we conducted an exploratory user study based on three learning activities: classifying quadrilaterals, discovering the protractor, and describing angles. We observed how paper interfaces can be easily adopted into traditional classroom practices.
2012. 7th European Conference on Technology Enhanced Learning, Saarbrücken, Germany, September 18-21, 2012. p. 37-50. DOI: 10.1007/978-3-642-33263-0_4.
[40] Anthropomorphic Language in Online Forums about Roomba, AIBO and the iPad
What encourages people to refer to a robot as if it were a living being? Is it the robot's humanoid or animal-like shape, its movements, or rather the kind of interaction it enables? We aim to investigate the characteristics of robots that lead people to anthropomorphize them by comparing different kinds of robotic devices and contrasting them with an interactive technology. We addressed this question by comparing anthropomorphic language in online forums about the Roomba robotic vacuum cleaner, the AIBO robotic dog, and the iPad tablet computer. A content analysis of 750 postings was carried out. We expected to find the highest amount of anthropomorphism in the AIBO forum but were unsure how far people referred to Roomba or the iPad as lifelike artifacts. Findings suggest that people anthropomorphize their robotic dog significantly more than their Roomba or iPad, across different topics of forum posts. Further, the topic of the post had a significant impact on anthropomorphic language.
2012. The IEEE International Workshop on Advanced Robotics and its Social Impacts (ARSO 2012), Technische Universität München, Munich, Germany, May 21-23, 2012. p. 54-59. DOI: 10.1109/ARSO.2012.6213399.
2011
[39] Métaphores machinales
Over the centuries, man has seen himself as a machine: successively hydropneumatic, mechanical, electrical and, today, digital. Each new invention offers a new perspective on the living without ever being completely satisfactory. There always remains "something" that seems hard to reduce to a mechanism, and for many this something, which we only see by contrast, is what is proper to man.
L'Homme-machine et ses avatars. Entre science, philosophie et littérature - XVIIe-XXIe siècles; Vrin, 2011.
p. 237-240.
[38] From hardware and software to kernels and envelopes: a concept shift for robotics, developmental psychology, and brain sciences
Neuromorphic and Brain-Based Robots; Cambridge: Cambridge University Press, 2011.
p. 217-250.
[37] L'homme, l'animal et la machine : Perpétuelles redéfinitions
Do animals have consciousness? Can machines be intelligent? Every new discovery by biologists and every technological advance invites us to reconsider what is proper to man. This book, the fruit of a collaboration between Georges Chapouthier, biologist and philosopher of biology, and Frédéric Kaplan, engineer specializing in artificial intelligence and human-machine interfaces, reviews the many ways in which animals and machines can be compared to human beings. After a synthetic panorama of the capacities of animals and machines to learn, develop consciousness, feel pain or emotion, and build a culture or a morality, the authors detail what links us to our biological or artificial alter egos: attachment, sexuality, law, hybridization. Beyond that, they explore traits that seem specifically human - imagination, the soul or the sense of time - but for how much longer... A stimulating exploration at the heart of the mysteries of human nature, which proposes a redefinition of man in his relation to the animal and the machine.
CNRS Editions, Paris.
[36] HRI in the home: A Longitudinal Ethnographic Study with Roomba
Personal service robots, such as the iRobot Roomba vacuum cleaner provide a promising opportunity to study human-robot interaction (HRI) in domestic environments. Still rather little is known about long-term impacts of robotic home appliances on people’s daily routines and attitudes and how they evolve over time. We investigate these aspects through a longitudinal ethnographic study with nine households, to which we gave a Roomba cleaning robot. During six months, data is gathered through a combination of qualitative and quantitative methods.
1st Symposium of the NCCR Robotics, Zürich, Switzerland, June 16, 2011.
[35] Roomba is not a Robot; AIBO is still Alive! Anthropomorphic Language in Online Forums
Anthropomorphism describes people's tendency to ascribe humanlike qualities to non-human artifacts, such as robots. We investigated anthropomorphic language in 750 posts of online forums about the Roomba robotic vacuum cleaner, the AIBO robotic dog and the iPad tablet computer. Results of this content analysis suggest a significant difference in anthropomorphic language usage among the three technologies. In contrast to Roomba and iPad, the specific characteristics of the robotic dog fostered a more social interaction and led people to use considerably more anthropomorphic language.
3rd International Conference on Social Robotics, ICSR 2011, Amsterdam, The Netherlands, November 24-25, 2011.
[34] People's Perception of Domestic Service Robots: Same Household, Same Opinion?
The study presented in this paper examined people's perception of domestic service robots by means of an ethnographic study. We investigated the initial reactions of nine households who lived with a Roomba vacuum cleaner robot over a two-week period. To explore people's attitudes and how they changed over time, we used a recurring questionnaire, filled in at three different times, integrated in 18 semi-structured qualitative interviews. Our findings suggest that being part of a specific household has an impact on how each individual household member perceives the robot. We interpret this to mean that, even though individual experiences with the robot might differ from one another, a household shares a specific opinion about the robot. Moreover, our findings indicate that how people perceived Roomba did not change drastically over the two-week period.
2011. 3rd International Conference on Social Robotics, Amsterdam, The Netherlands, November 24-25, 2011. p. 204-213. DOI: 10.1007/978-3-642-25504-5.
[33] Classroom orchestration: The third circle of usability
We analyze classroom orchestration as a question of usability in which the classroom is the user. Our experiments revealed design features that reduce the global orchestration load. According to our studies in vocational schools, paper-based interfaces have the potential of making educational workflows tangible, i.e. both visible and manipulable. Our studies in university classes converge on minimalism: they reveal the effectiveness of tools that make visible what is invisible but do not analyze, predict or decide for teachers. These studies revealed a third circle of usability. The first circle concerns individual usability (HCI). The second circle is about design for teams (CSCL/CSCW). The third circle raises design choices that impart visibility, reification and minimalism on classroom orchestration. Whether a CSCL environment allows students to see what the next team is doing (e.g. tabletops versus desktops) illustrates the third-circle issues that matter for orchestration.
2011. 9th International Conference on Computer Supported Collaborative Learning, Hong Kong, China, July 4-8, 2011. p. 510-517.
[32] A 99 Dollar Head-Mounted Eye Tracker
Head-mounted eye-trackers are powerful research tools to study attention processes in various contexts. Most existing commercial solutions are still very expensive, limiting the current use of this technology. We present a hardware design to build, at low cost, a camera-based head-mounted eye tracker using two cameras and one infrared LED. A Playstation Eye camera (PEye) is fixed on an eyeglasses frame and positioned under one eye to track its movements. The filter of the PEye is replaced by another one (Optolite 750nm) that blocks the visible light spectrum. The focal length of the PEye needs to be re-adjusted in order to obtain a sharp image of the eye. This is done by increasing the distance between the charge coupled device (CCD) and the lens by a few millimeters. One IR-LED (Osram SFH485P) is installed near the PEye lens to impose an artificial infrared lighting which produces the so-called "dark pupil effect". This is done while respecting the Minimum Safe Working Distance. We positioned a second camera on the front side of the eyeglasses frame. Preliminary applicative tests indicate an accuracy of approximately one degree of visual angle, which makes this tool relevant for many eye-tracking projects.
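A minimal sketch of the dark-pupil detection step such a tracker relies on, assuming an IR-illuminated grayscale eye frame (the threshold and morphology settings are our guesses, not the published parameters):

```python
import cv2
import numpy as np

def find_pupil_center(eye_gray, threshold=40):
    """Locate the pupil in an IR-lit eye image: the IR LED makes the
    pupil the darkest blob (the 'dark pupil effect'), so we threshold,
    keep the largest contour and return its centroid in pixels."""
    _, mask = cv2.threshold(eye_gray, threshold, 255, cv2.THRESH_BINARY_INV)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    pupil = max(contours, key=cv2.contourArea)
    m = cv2.moments(pupil)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])
```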
In F. Vitu, E. Castet, & L. Goffart (Eds.), Abstracts of the 16th European Conference on Eye Movements. Presented at ECEM, Marseille, August 21-25, 2011.
[31] Producing and Reading Annotations on Paper Documents: a geometrical framework for eye-tracking studies
The printed textbook remains the primary medium for studying in educational systems. Learners use personal annotation strategies while reading. These practices play an important role in supporting working memory, enhancing recall and influencing attentional processes. To be able to study these cognitive mechanisms we have designed and built a lightweight head-mounted eye tracker. Contrary to many eye trackers that require the reader's head to stay still, our system permits complete freedom of movement and thus lets us study reading behavior as it occurs in everyday life. To accomplish this we developed a geometrical framework to determine the location of the gaze on a flattened document page. The eye tracker embeds a dual-camera system which synchronously records the reader's eye movements and the paper document, and the framework post-processes these two video streams. First, it performs monocular 3D tracking of the human eyeball to infer a plausible 3D gaze trajectory. Second, it applies a feature-point-based method to recognize the document page and robustly estimate its planar pose. Finally, it disambiguates their relative position by optimizing the system parameters. Preliminary tests show that the proposed method is accurate enough to obtain reliable fixations on textual elements.
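The page-recognition and pose step can be approximated with standard feature matching; a hedged sketch using ORB features and a RANSAC homography (the framework's actual features and parameter optimization are not reproduced here):

```python
import cv2
import numpy as np

orb = cv2.ORB_create(1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def gaze_on_page(page_img, scene_img, gaze_xy):
    """Map a gaze point from the scene camera onto the flat page:
    match features between the reference page and the scene frame,
    estimate a scene-to-page homography, and transform the gaze."""
    kp1, des1 = orb.detectAndCompute(page_img, None)
    kp2, des2 = orb.detectAndCompute(scene_img, None)
    matches = matcher.match(des1, des2)
    page_pts = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    scene_pts = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(scene_pts, page_pts, cv2.RANSAC, 5.0)
    return cv2.perspectiveTransform(np.float32([[gaze_xy]]), H)[0, 0]
```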
Symposium N°13: Interacting with electronic and mobile media: Oculomotor and cognitive effects. In F. Vitu, E. Castet, & L. Goffart (Eds.), Abstracts of the 16th European Conference on Eye Movements. Presented at ECEM, Marseille, August 21-25, 2011.
[30] Paper Interface Design for Classroom Orchestration
Designing computer systems for educational purpose is a difficult task. While many of them have been developed in the past, their use in classrooms is still scarce. We make the hypothesis that this is because those systems take into account the needs of individuals and groups, but ignore the requirements inherent in their use in a classroom. In this work, we present a computer system based on a paper and tangible interface that can be used at all three levels of interaction: individual, group, and classroom. We describe the current state of the interface design and why it is appropriate for classroom orchestration, both theoretically and through two examples for teaching geometry.
CHI, Vancouver, BC, Canada, May 7-12, 2011.
[29] Cognitive and social effects of handwritten annotations
This article first describes a method for extracting and classifying handwritten annotations on printed documents using a simple camera integrated in a lamp. The ambition of such research is to offer a seamless integration of notes taken on printed paper into our daily interactions with digital documents. Existing studies propose a classification of annotations based on their form and function. We demonstrate a method for automating such a classification and report experimental results showing the classification accuracy. In the second part of the article we provide a road map for conducting user-centered studies using eye-tracking systems to investigate the cognitive roles and social effects of annotations. Based on our understanding of some research questions arising from this experiment, in the last part of the article we describe a social learning environment that facilitates knowledge sharing across a class of students or a group of colleagues through shared annotations.
Red-conference, rethinking education in the knowledge society, Monte Verità, Switzerland, March 7-10, 2011.
2010
[28] Man-machine interface method executed by an interactive device
The aim of the present invention is to provide a cost-effective solution for a man-machine interface without physical contact with the device to which a command should be given. This aim is achieved by a man-machine interface method executed by an interactive device comprising a display, an infrared camera and an infrared illumination system, the method executing the following steps: capturing a first image by the camera with infrared illumination; capturing a second image by the camera without infrared illumination; subtracting the two images to create a difference image; creating a movement map, using a high-pass filter to filter out the static part over a plurality of difference images; computing a barycentre of at least one region of the movement map; assigning at least one cursor based on the position of the barycentre movement; and modifying information on the display based on the position of this cursor.
2010.
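Read as a pipeline, the claimed steps reduce to a few array operations; a toy rendering under our own parameter choices (the running-average high-pass is one plausible reading of the "movement map" step):

```python
import numpy as np

class MotionCursor:
    """Toy version of the claimed pipeline: difference of IR-lit and
    unlit frames, a temporal high-pass that drops static content,
    then the barycentre of the movement map drives the cursor."""

    def __init__(self, alpha=0.9):
        self.alpha = alpha      # smoothing factor of the running average
        self.static = None      # slowly-varying component of the difference

    def update(self, frame_ir_on, frame_ir_off):
        diff = np.abs(frame_ir_on.astype(np.float32)
                      - frame_ir_off.astype(np.float32))
        if self.static is None:
            self.static = diff
        self.static = self.alpha * self.static + (1 - self.alpha) * diff
        movement = np.clip(diff - self.static, 0, None)  # high-pass output
        weight = movement.sum()
        if weight < 1e-6:
            return None  # no motion detected: keep the previous cursor
        ys, xs = np.indices(movement.shape)
        return (float((xs * movement).sum() / weight),
                float((ys * movement).sum() / weight))
```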
[27] Vocal Sticky Notes: Adding Audio Comments on Paper Documents
In this paper we present a tool for annotating paper documents with vocal comments. The tool does not require specially processed documents and allows natural, simple interactions: sticking a note to add a comment, and placing an object on it to listen to the recording. A pilot experiment in which teachers used this tool to annotate reports revealed that vocal comments require an extra effort compared to writing. We discuss future work that could either remove or take advantage of this extra effort.
28th ACM Conference on Human Factors in Computing Systems, Atlanta, Georgia, USA, April 10-15, 2010.
[26] A Paper Interface for Code Exploration
We describe Paper Code Explorer, a paper-based interface for code exploration. This augmented reality system is designed to offer active exploration tools for programmers confronted with the problem of getting familiar with a large codebase. We first present an initial qualitative study that informed the design of the system and then describe its main characteristics. As discussed in the conclusion, paper has many intrinsic advantages for our application.
12th ACM International Conference on Ubiquitous Computing, Copenhagen, Denmark, September 26-29, 2010.
[25] Extraction and Classification of Handwritten Annotations
This article describes a method for extracting and classifying handwritten annotations on printed documents using a simple camera integrated in a lamp or a mobile phone. The ambition of such research is to offer a seamless integration of notes taken on printed paper into our daily interactions with digital documents. Existing studies propose a classification of annotations based on their form and function. We demonstrate a method for automating such a classification and report experimental results showing the classification accuracy.
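One simple way to realize the extraction step, assuming the annotated scan has already been registered to the printed original (our illustration; the paper's actual extraction and classifier are not shown):

```python
import cv2

def annotation_mask(original_gray, annotated_gray, ink_thresh=30):
    """Keep only pixels darker in the annotated scan than in the
    printed original: handwriting adds ink, so the saturating
    difference original - annotated is positive exactly there."""
    ink = cv2.subtract(original_gray, annotated_gray)
    _, mask = cv2.threshold(ink, ink_thresh, 255, cv2.THRESH_BINARY)
    return cv2.medianBlur(mask, 3)  # drop isolated scanner noise
```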
1st International Workshop on Paper Computing, a Ubicomp 2010 workshop (PaperComp 2010), Copenhagen, Denmark, September 26, 2010.
[24] An Interactive Table for Supporting Participation Balance in Face-to-Face Collaborative Learning
We describe an interactive table designed for supporting face-to-face collaborative learning. The table, Reflect, addresses the issue of unbalanced participation during group discussions. By displaying on its surface a shared visualization of member participation, Reflect is meant to encourage participants to avoid the extremes of over- and under-participation. We report on a user study that validates some of our hypotheses on the effect the table would have on its users. Namely we show that Reflect leads to more balanced collaboration, but only under certain conditions. We also show different effects the table has on over- and under-participators.
IEEE Transactions on Learning Technologies
2010
DOI: 10.1109/TLT.2010.18
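The feedback loop described here is simple enough to caricature: accumulate per-speaker talk time from the embedded microphones and light LEDs in proportion (a toy model under our assumptions; the table's actual audio processing is not shown):

```python
class ReflectSketch:
    """Toy participation display: each member's share of lit LEDs
    mirrors their share of total speaking time so far."""

    def __init__(self, n_speakers, n_leds=30):
        self.talk_time = [0.0] * n_speakers
        self.n_leds = n_leds

    def add_speech(self, speaker, seconds):
        self.talk_time[speaker] += seconds  # fed by voice activity detection

    def led_counts(self):
        total = sum(self.talk_time) or 1.0
        return [round(self.n_leds * t / total) for t in self.talk_time]
```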
[23] Low-Resolution Ambient Awareness Tools for Educational Support
We examine an approach in technology-enhanced learning that avoids deviating from existing pedagogical practices, as these are often resistant to change. This is accomplished by designing technology to augment learning activities that are already in common practice. We implemented two ambient awareness tools, Lantern and Reflect, in line with this approach. The former is tailored for recitation sections and improves student productivity, while the latter promotes participation balance in face-to-face collaboration. Both devices allow very limited interaction and provide low-resolution feedback, keeping the actual learning tasks at the center of the student's focus. We show that this approach, coupled with this simple design, makes these tools effective and easy to adopt.
CHI 2010 Workshop: The Future of HCI and Education, Atlanta, Georgia, USA, April 11, 2010.
2009
[22] Integration, incorporation, interface: L'évolution des systèmes techniques
Cahiers de l'Institut de la Méthode
2009
[21] Distributed Awareness for Class Orchestration
The orchestration process consists of managing classroom interactions at multiple levels: individual activities, teamwork and class-wide sessions. We study the process of orchestration in recitation sections, i.e. when students work on their assignments individually or in small groups in the presence of teaching assistants who give help on demand. Our empirical study revealed that recitation sections suffer from inefficient orchestration: too much attention is devoted to managing the relationship between students and teaching assistants, which prevents both sides from concentrating on their main task. We present a model of students' activities during recitation sections that emphasizes the issue of mutual awareness, i.e. monitoring help needs and TAs' availability. To tackle these difficulties, we developed two awareness tools. Both tools convey the same information: which exercise each group is working on, whether it has asked for help and for how long. In the centralized version, named Shelf, students provide information with a personal response system and the status of each team is juxtaposed on a central display. In the distributed version, named Lantern, each team provides information by interacting with a lamp placed on its table; the display is distributed over the classroom, the information being spatially associated with each group. We are now comparing these two versions in an empirical study with two first-year undergraduate classes in Physics. Preliminary results show that both versions increase the efficiency of interaction between students and teaching assistants. This contribution focuses on the distributed version.
2009. EC-TEL 2009, Synergy of Disciplines, Nice, September 29 - October 2, 2009. p. 211-225. DOI: 10.1007/978-3-642-04636-0_21.
[20] On the Importance of Spatial Configuration of Information
The spatial layout of information influences collaborative interactions. We compared two awareness tools which give information on the status of participants in collaborative work, one displaying it on a single screen and the other distributing it in the room.
2009. 11th International Conference on Ubiquitous Computing, Florida, September 30 - October 3, 2009.
[19] Paper-based Concept Map: the Effects of Tabletop on an Expressive Collaborative Learning Task
Augmented tabletops have recently attracted considerable attention in the literature. However, little is known about the effects these interfaces have on learning tasks. In this paper, we report on the results of an empirical study that explores the usage of tabletop systems in an expressive collaborative learning task. In particular, we focus on measuring the difference in learning outcomes, at individual and group levels, between students using two interfaces: a traditional computer and an augmented tabletop with tangible input. No significant effect of the interface on individual learning gain was found. However, groups using the traditional computer learned significantly more from their partners than those using the tabletop interface. Further analysis showed an interaction effect of the condition and group heterogeneity on learning outcomes. We also present our qualitative findings in terms of how group interaction and strategy differ in the two conditions.
2009. September 1-5, 2009. p. 149-158.
[18] Multi-Finger Interactions with Papers on Augmented Tabletops
Although many augmented tabletop systems have shown the potential and usability of finger-based interactions and paper-based interfaces, they have mainly dealt with each of them separately. In this paper, we introduce a novel method aimed at improving natural human interaction on augmented tabletop systems, which enables multiple users to use both fingertips and physical paper as media for interaction. The method uses computer vision techniques to detect multiple fingertips, both hovering over and touching the surface, in real time, regardless of their orientation. Fingertip and touch positions are then used in combination with paper tracking to provide a richer set of interaction gestures that users can perform in collaborative scenarios.
2009. 3rd International Conference on Tangible and Embedded Interaction (TEI 2009), Cambridge, UK, February 16-18, 2009. p. 267-274. DOI: 10.1145/1517664.1517720.
[17] Interpersonal Computers for Higher Education
Interactive Artifacts and Furniture Supporting Collaborative Work and Learning; Springer US, 2009.
p. 129-145. DOI: 10.1007/978-0-387-77234-9_8.
2008
[16] Le corps comme variable expérimentale
The evolution of the concepts of body and animation processes in robotics now leads to defining the concept of a kernel: a set of stable algorithms independent of the body spaces to which they apply. It then becomes possible to study how certain bodily inscriptions, treated as variables, structure the behavior and, in the longer term, the development of a robot. This methodological approach can lead to an original view of development that underlines the importance of a variable body whose boundaries are continually redefined.
Revue philosophique de la France et de l'étranger
2008
DOI: 10.3917/rphi.083.0287
[15] Reflect : An Interactive Table for Regulating Face-to-Face Collaborative Learning
In face-to-face collaborative learning, unbalanced participation often leads to the undesirable result of some participants experiencing lower learning outcomes than others. Providing feedback to the participants on their level of participation could improve their ability to self-regulate, leading to more balanced collaboration. We propose a new approach for providing this feedback that takes the shape of a meeting table with a reactive visualization displayed on its surface. The meeting table monitors the collaborative interaction taking place around it using embedded microphones and displays real-time feedback to the participants on an array of LEDs, inviting them to balance their collaboration. We report on an ongoing study that currently shows a positive effect of our table on group regulation.
Times of Convergence: Technologies Across Learning Contexts; Berlin / Heidelberg: Springer, 2008.
p. 39-48. DOI: 10.1007/978-3-540-87605-2_5.
[14] Classification of dog barks: a machine learning approach
In this study we analyzed the possible context-specific and individual-specific features of dog barks using a new machine-learning algorithm. A pool of more than 6,000 barks, recorded in six different communicative situations, was used as the sound sample. The algorithm's task was to learn which acoustic features of the barks, recorded in different contexts and from different individuals, could be distinguished from one another. The program conducted this task by analyzing barks emitted in previously identified contexts by identified dogs. After the best feature set had been obtained (the one achieving the highest identification rate), the efficiency of the algorithm was tested in a classification task in which unknown barks were analyzed. The recognition rates we found were well above chance level: the algorithm could categorize the barks according to their recorded situation with an efficiency of 43%, and identify the barking individuals with an efficiency of 52%. These findings suggest that dog barks have context-specific and individual-specific acoustic features. In our opinion, this machine-learning method may provide an efficient tool for analyzing acoustic data in various behavioral studies.
Animal Cognition
2008
DOI: 10.1007/s10071-007-0129-9
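The classification protocol translates directly into a few lines with a generic learner as a stand-in (the abstract does not name the algorithm, so the random forest and the random placeholder features below are our assumptions):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((6000, 20))            # placeholder acoustic features, one row per bark
y_context = rng.integers(0, 6, 6000)  # labels for the six recording situations

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y_context, cv=5)
print(f"context recognition: {scores.mean():.0%} (chance level ~17%)")
```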
[13] Computational models in the debate over language learnability
Computational models have played a central role in the debate over language learnability. This article discusses how they have been used in different stances, from generative views to more recently introduced explanatory frameworks based on embodiment, cognitive development and cultural evolution. By digging into the details of certain specific models, we show how they organize, transform and rephrase defining questions about what makes language learning possible for children. Finally, we present a tentative synthesis to recast the debate using the notion of learning bias.
Infant and Child Development
2008
DOI: 10.1002/icd.544
2007
[12] Intrinsically motivated machines
Children seem intrinsically motivated to manipulate, to explore, to test, to learn and they look for activities and situations that provide such learning opportunities. Inspired by research in developmental psychology and neuroscience, some researchers have started to address the problem of designing intrinsic motivation systems. A robot controlled by such systems is able to autonomously explore its environment not to fulfill predefined tasks but driven by an incentive to search for situations where learning happens efficiently. In this paper, we present the origins of these intrinsically motivated machines, our own research in this novel field and we argue that intrinsic motivation might be a crucial step towards machines capable of life-long learning and open-ended development.
50 Years of AI, Festschrift; Springer Verlag, 2007.
p. 304-315. DOI: 10.1007/978-3-540-77296-5_27.
[11] A tabletop display for self-regulation in face-to-face Meetings
IEEE TABLETOP 2007, Portland, Rhode Island, USA, October 2007.
[10] Docklamp: a portable projector-camera system
2nd IEEE TableTop workshop, Newport, Rhode Island, USA, October 10-12, 2007.
[9] The science of laughter
The rediscovery of intelligence - 20 years of AI - in Zurich and world-wide; Zurich: AI-Lab Zurich, 2007.
[8] The progress drive hypothesis: an interpretation of early imitation
Models and Mechanisms of Imitation and Social Learning: Behavioural, Social and Communication Dimensions; Cambridge: Cambridge University Press, 2007.
p. 361-377.
[7] Futur 2.0: Comprendre les 20 prochaines années
How will we live in 20 years? Unique of its kind, this illustrated book, accessible to everyone, is the ideal companion for dreamers of the future, for those who wish to understand it in order to build it better. Structured as a pedagogy of the future and rich in meaning, Futur 2.0 sensitizes the reader to everything that could change daily life over the next two decades. How will the technological shifts already under way affect the way we travel, work, care for ourselves, learn, play, eat, and act in our environment, alone and with others? Researchers, philosophers, sociologists and artists set out clearly and precisely the sociocultural, economic and ecological stakes of our future. They imagine and recount their vision of tomorrow's world, lifting a corner of the curtain on the challenges and the creativity that will allow each of us to build our own future. For 20 years, the Futuroscope has championed gentle pedagogy, making desires for the future and for self-renewal its territory. Mixing fiction, science and advanced technology, this collective and optimistic book is an invitation to invent, to open our minds and to diversify our knowledge, so that together we can question the next 20 years, try to sketch their contours and re-enchant the future.
Fyp editions.
[6] Intrinsic Motivation Systems for Autonomous Mental Development
Exploratory activities seem to be intrinsically rewarding for children and crucial for their cognitive development. Can a machine be endowed with such an intrinsic motivation system? This is the question we study in this paper, presenting a number of computational systems that try to capture this drive towards novel or curious situations. After discussing related research from developmental psychology, neuroscience, developmental robotics and active learning, this article presents the mechanism of Intelligent Adaptive Curiosity, an intrinsic motivation system which pushes a robot towards situations in which it maximizes its learning progress. This drive makes the robot focus on situations which are neither too predictable nor too unpredictable, thus permitting autonomous mental development. The complexity of the robot's activities autonomously increases and complex developmental sequences self-organize without being constructed in a supervised manner. Two experiments are presented illustrating the stage-like organization emerging with this mechanism. In one of them, a physical robot is placed on a baby playmat with objects that it can learn to manipulate. Experimental results show that the robot first spends time in situations which are easy to learn, then shifts its attention progressively to situations of increasing difficulty, avoiding situations in which nothing can be learnt. Finally, these various results are discussed in relation to more complex forms of behavioural organization and data coming from developmental psychology.
IEEE Transactions on Evolutionary Computation
2007
DOI: 10.1109/TEVC.2006.890271
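At its core, the drive can be caricatured as "act in the region whose prediction error is dropping fastest"; a toy rendering of that heuristic (window size and the epsilon-greedy choice are our assumptions, not the published mechanism's exact parameters):

```python
import random
from collections import deque

class Region:
    """One sensorimotor region: learning progress is estimated as the
    decrease of the mean prediction error over a sliding window."""

    def __init__(self, window=20):
        self.errors = deque(maxlen=window)

    def record(self, prediction_error):
        self.errors.append(prediction_error)

    def progress(self):
        if len(self.errors) < self.errors.maxlen:
            return float("inf")          # favor regions not yet sampled enough
        half = self.errors.maxlen // 2
        older = sum(list(self.errors)[:half]) / half
        newer = sum(list(self.errors)[half:]) / half
        return older - newer             # positive while the model improves

def choose_region(regions, epsilon=0.1):
    """Curiosity step: mostly exploit the region with the highest
    learning progress, occasionally explore at random."""
    if random.random() < epsilon:
        return random.choice(regions)
    return max(regions, key=lambda r: r.progress())
```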
[5] Un robot motivé pour apprendre : le rôle des motivations intrinsèques dans le développement sensorimoteur
This article presents recent work illustrating how a robot endowed with an intrinsic motivation system can explore its environment and learn a succession of tasks that have not been specified by its programmer. A generic program controls the robot and pushes it to seek situations where its progress in prediction is maximal. These situations, called "progress niches", depend on the opportunities present in the environment but also on the robot's morphology, its specific cognitive constraints and its past experience. First results have been obtained in locomotion, in the discovery of "affordances", and in prelinguistic exchanges. In each of these experiments, the robot explores the situations that are "interesting" from its own point of view, given its learning capacities and the constraints of its sensorimotor space. The article discusses the results of these first experiments and concludes on the possibility of giving back to neuroscience and psychology, which inspired this robotics work, new lines of inquiry and new concepts for thinking about developmental processes in children.
Enfance
2007
[4] Language Evolution as a Darwinian Process: Computational Studies
This paper presents computational experiments that illustrate how one can precisely conceptualize language evolution as a Darwinian process. We show that there is potentially a wide diversity of replicating units and replication mechanisms involved in language evolution. Computational experiments allow us to study systemic properties emerging from populations of linguistic replicators: linguistic replicators can adapt to specific external environments; they evolve under the pressure of the cognitive constraints of their hosts, as well as under the functional pressure of the communication for which they are used; one can observe neutral drift; and coalitions of replicators may appear, forming higher-level groups which can themselves become subject to competition and selection.
Cognitive Processing
2007
DOI: 10.1007/s10339-006-0158-3
[3] In search of the neural circuits of intrinsic motivation
Children seem to acquire new know-how in a continuous and open-ended manner. In this paper, we hypothesize that an intrinsic motivation to progress in learning is at the origin of the remarkable structure of children's developmental trajectories. In this view, children engage in exploratory and playful activities for their own sake, not as steps toward other extrinsic goals. The central hypothesis of this paper is that intrinsically motivating activities correspond to an expected decrease in prediction error. This motivation system pushes the infant to avoid both predictable and unpredictable situations in order to focus on the ones that are expected to maximize progress in learning. Based on a computational model and a series of robotic experiments, we show how this principle can lead to organized sequences of behavior of increasing complexity, characteristic of several behavioral and developmental patterns observed in humans. We then discuss the putative circuitry underlying such an intrinsic motivation system in the brain and formulate two novel hypotheses. The first is that tonic dopamine acts as a learning progress signal. The second is that this progress signal is directly computed through a hierarchy of microcortical circuits that act both as prediction and metaprediction systems.
Frontiers in Neuroscience
2007
DOI: 10.3389/neuro.01/1.1.017.2007
2006
[2] Information-theoretic framework for unsupervised activity classification
This article presents a mathematical framework based on information theory for comparing multivariate sensory streams. Central to this approach is the notion of configuration: a set of distances between information sources, statistically evaluated for a given time span. As information distances simultaneously capture the effects of physical closeness, intermodality, functional relationship and external couplings, a configuration can be interpreted as a signature for specific patterns of activity. This provides ways of comparing activity sequences by viewing them as points in an activity space. Results of experiments with an autonomous robot illustrate how this framework can be used to perform unsupervised activity classification.
Advanced Robotics
2006
DOI: 10.1163/156855306778522514
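The configuration idea can be sketched directly: discretize each stream, estimate pairwise mutual information, and normalize it into a distance (the equal-width binning and the normalization by the larger entropy are our choices, not necessarily the paper's):

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def information_distance(x, y, bins=16):
    """Normalized information distance between two sensor streams,
    d = 1 - I(X;Y) / max(H(X), H(Y)), after simple equal-width binning."""
    xd = np.digitize(x, np.histogram_bin_edges(x, bins))
    yd = np.digitize(y, np.histogram_bin_edges(y, bins))
    mi = mutual_info_score(xd, yd)
    hx = mutual_info_score(xd, xd)  # I(X;X) = H(X)
    hy = mutual_info_score(yd, yd)
    return 1.0 - mi / max(hx, hy, 1e-12)

def configuration(streams, bins=16):
    """The 'configuration' of the framework: the matrix of pairwise
    information distances over the chosen time span."""
    n = len(streams)
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d[i, j] = d[j, i] = information_distance(streams[i], streams[j], bins)
    return d
```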
2005
[1] Simple models of distributed co-ordination
Distributed co-ordination is the result of dynamical processes enabling independent agents to coordinate their actions without the need for a central co-ordinator. In the past few years, several computational models have illustrated the role played by such dynamics in self-organizing communication systems. In particular, it has been shown that agents can bootstrap shared convention systems based on simple local adaptation rules. Such models have played a pivotal role in our understanding of emergent language processes. However, only a few formal or theoretical results have been published about such systems. Deliberately simple computational models are discussed in this paper in order to make progress in understanding the dynamics underlying distributed coordination and the scaling laws of such systems. In particular, the paper focuses on explaining the convergence speed of these models, a largely under-investigated issue. Conjectures obtained through empirical and qualitative studies of these simple models are compared with results of more complex simulations and discussed in relation to theoretical models formalized using Markov chains, game theory and Polya processes.
Connection Science
2005
DOI: 10.1080/09540090500177596
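The dynamics in question are easy to reproduce at small scale; a minimal naming game in the spirit of these models (all parameters ours), whose measured convergence time can then be compared with the theoretical predictions:

```python
import random

def naming_game(n_agents=50, max_steps=200000):
    """Minimal naming game: agents bootstrap a single shared word
    through pairwise interactions using only local adaptation."""
    vocab = [set() for _ in range(n_agents)]
    for step in range(max_steps):
        speaker, hearer = random.sample(range(n_agents), 2)
        if not vocab[speaker]:
            vocab[speaker].add(f"w{step}")          # invent a new word
        word = random.choice(sorted(vocab[speaker]))
        if word in vocab[hearer]:
            vocab[speaker] = {word}                 # success: both align
            vocab[hearer] = {word}
        else:
            vocab[hearer].add(word)                 # failure: hearer learns it
        if all(v == {word} for v in vocab):
            return step                             # convergence time
    return None

# e.g. naming_game(50) returns the number of interactions needed
# for all 50 agents to share a single convention.
```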
Other publications
Teaching & PhD
Teaching
Digital Humanities
Architecture