The Republic of Türkiye celebrated its 100th anniversary in October 2023. This century-long period also represents the century-long memory of Türkiye's territorial borders. Despite many problems over the past decades, including diplomatic disputes, terrorism, and water and territorial conflicts, Türkiye's borders have remained stable. However, the first quarter of the 21st century has proven that borders, like many other things, can transform.
Türkiye has been hosting 4 million Syrians since 2010. In addition, hundreds of thousands of irregular migrants from Afghanistan, Pakistan, and African countries use Türkiye as a destination and transit country. Like most states around the world, Türkiye has sought a solution to this problem (!) at its borders, building a total of 1,160 km of integrated physical border security systems over the last 15 years. These systems include state-of-the-art technological elements such as lighting, motion and heat sensors, electro-optic towers, thermal cameras, and drones and other unmanned aerial vehicles. This technological layer, erected against migrants, is in a sense re-bordering territorial borders that carry a century-old memory. This study discusses and analyzes the technology-centered border walls on Türkiye's borders with Syria, Iran, and Iraq, drawing on field observations that will be included in the presentation.
Digital platforms and biometrics are increasingly deployed to support EU processes and practices which aim to regulate mobility. On the one hand, member states, as the end users of these systems, are required to develop and implement complex technologies, including the collection and sharing of biometric data across state authorities (immigration, law enforcement). On the other hand, biometrics shift the focus of control from physical borders to the bodies of migrants and travellers themselves (Rygiel, 2011), who become easily (re)identifiable, as their biometric identities become entangled with a variety of law enforcement goals.
This article examines aspects of the digitalisation of asylum procedures in Greece and the evolution and consolidation of the hotspot approach, in light of the new EU Pact on Asylum and Migration. Building on policy analysis and fieldwork notes collected between 2022 and 2024, it argues that new technologies are preventing people from accessing not only asylum but a host of other rights, and work in tandem with other racialised and bureaucratic tools to further criminalise asylum seekers.
Objects are not merely functional; they act as signs carrying cultural, personal and emotional meanings. The way space is organized (or disorganized) communicates meaning. This paper critically engages with the scholarship on “home away from home,” interrogating the complexities of homemaking within migration literature. While home is increasingly understood as a dynamic process rather than a fixed position, this study examines how mobile Indian men navigate the tensions between movement and settlement, particularly in the context of occupational relocations. The research foregrounds the role of material culture in shaping and reflecting non-Western masculine identities, exploring how domestic objects mediate emotions, belonging and embodiment in transitory living conditions.
Employing the theoretical lens of “temporal materialities” and “object attachments”, this study draws on thematic analysis of interviews, participant-generated photographs and researcher-generated drawing observations to investigate the evolving relationships between mobile men and their material objects. Findings contribute to a more nuanced understanding of home-making among mobile populations, moving beyond simplistic binaries of permanent vs. temporary, masculine vs. feminine, and private vs. public. Through an analysis of object biographies, the paper identifies three key themes: blending tradition and modernity; adaptability and multifunctionality; and personal expression through material interactions. These themes illuminate the affective and embodied dimensions of mobility.
Migration disrupts traditional identity structures, but objects help maintain continuity. By centring the emotional entanglements of homemaking, this study contributes to anthropological discussions on migration, identity and materiality, offering a new perspective on how men construct and maintain a sense of home in motion.
The China-Nepal border, spanning remote Himalayan terrain, has long been a conduit for trade, pilgrimage, and migration. Recently, it has become a heavily monitored zone – what this paper calls the Himalayan Firewall – where physical barriers merge with advanced surveillance technologies, raising concerns over human rights and freedom of movement.
China has intensified border surveillance, employing facial recognition, drones, satellite monitoring, and AI-driven tools to track cross-border movements. These systems disproportionately affect Tibetan refugees, many of whom risk dangerous crossings into Nepal to escape political repression. Digital surveillance, coupled with Nepal’s growing political alignment with China, has drastically reduced successful refugee escapes, leading to forced repatriations despite international protections.
Beyond physical borders, surveillance extends into digital spaces, targeting Tibetan communities in Nepal. Cultural and political activities are closely monitored, limiting freedom of expression and assembly. Yet, technology also offers tools for resistance – refugee networks use encrypted apps, GPS mapping, and social media to coordinate crossings and document abuses.
The Himalayan Firewall reflects global trends in border control, where digital surveillance exacerbates inequalities and undermines human rights. This paper calls for transparency, accountability, and adherence to international legal standards to ensure border technologies respect human dignity and freedom.
Keywords: Surveillance, Tibetan Refugees, Human Rights, Sino-Nepalese Border, Digital Governance
Sanctuary cities worldwide often claim to support precaritised migrants residing in their jurisdiction as a reaction against exclusionary national policies. This paper is the first of its kind to analyse how digital technologies hinder the efficacy of sanctuary policies, in a way that may render them obsolete. Drawing on comparative evidence from the UK and Canada, it explores digitally-driven responses to the COVID-19 pandemic by different government levels (local, regional, national) and their impact on migrants’ rights. Findings reveal that the increasing interoperability among population databases has crucially enhanced the capacity of immigration authorities to access sensitive data collected by local service providers, which can then be used to detect, detain, and deport precaritised migrants. Such practices of hostile data-sharing thus weaken pre-existing sanctuary protections that are based on limited cooperation between local and national officials. Yet, local actors have sometimes deployed fresh counter-strategies, notably building non-interoperable data management infrastructures so as to ensure safer access to basic healthcare services. While prior scholarship has mostly examined the role of digitisation in external bordering processes, this paper extends the academic debate to the domain of internal borders.
The further embedding of immigration checks into UK public sector institutions has made them key sites of bordering. Information systems for datafication, established to enable the reporting and sharing of data between these institutions and the UK Home Office, have become emerging sites of opposition to the UK’s border and immigration regime. In this paper, I will highlight the ways in which ‘everyday borderworkers’ in hospitals and higher education have practised forms of refusal that have undermined these information systems and made care and education accessible to patients and students. However, such ‘data activism’ and the ‘un/bordering’ it enables is under threat from the expansion of machine learning into state bordering practices and processes, or what Louise Amoore has called ‘the deep border’. Just as, for some states, almost every mundane space is becoming a potential site of bordering, so too computer science appears to be rendering all spaces as ‘feature spaces’. A feature is a set of attributes associated with an example, and it is generated from the examples the algorithm is exposed to. The algorithm still generates the feature whether data is withheld or not: it uses the examples that are there. Clustering algorithms, Amoore argues, not only become a way of imagining and grouping people, places and even countries but also of inferring the behaviours and attributes of these groups. I will argue that the expansion of the deep border into the bordering of public sector institutions will render current forms of data activism to deborder these institutions obsolete.
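To make the clustering point concrete, the following is a minimal Python sketch, not drawn from the paper or from Amoore's work: the data is synthetic and the attribute framing is purely hypothetical. It illustrates how a model fitted on available examples can assign a new individual to a group from whatever attributes are observed, and then infer a withheld attribute from the group rather than from the person.

```python
# Illustrative sketch (synthetic data, hypothetical attributes): a clustering
# model infers a withheld attribute from group membership alone.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# A toy "feature space": each row is a person, columns are attributes
# harvested from institutional records.
population = np.vstack([
    rng.normal(loc=[1.0, 1.0, 1.0], scale=0.2, size=(50, 3)),
    rng.normal(loc=[5.0, 5.0, 5.0], scale=0.2, size=(50, 3)),
])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(population)

# A new individual withholds their third attribute: the model simply
# uses the examples and attributes that are there.
observed = np.array([4.8, 5.1])  # first two attributes only

# Assign to the nearest cluster using only the observed attributes ...
centroids = model.cluster_centers_
cluster = int(np.argmin(np.linalg.norm(centroids[:, :2] - observed, axis=1)))

# ... then read the withheld attribute off the group, not the person.
inferred = centroids[cluster, 2]
print(f"assigned cluster: {cluster}, inferred withheld attribute: {inferred:.2f}")
```

Withholding data, in other words, does not remove a person from the feature space; the gap is filled by inference from whoever else is there.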
This article introduces the notion of kinship surveillance as the unilateral production of knowledge about familial relationships of migrants, undertaken and weaponised by the state to enact border regimes. I ask why and how knowledge about migrants’ kinship relations has been rendered a relevant scale of border control, and how it has historically been enacted through different media technologies. The article’s aim is to expose the historical cultural work that legitimises a technopolitics of weaponisation around kinship: rendering an enunciation of “family” as biological and genetic into a means of enacting border regimes. In particular, the article unpicks how fears of fraud and deception, and fears of being “too slow” and “overwhelmed”, structure the ways kinship gets technologically reduced to information points that can be extracted, stored, surveilled, and used in complicity with border regimes. In doing so, the paper draws on archival material around the introduction of “DNA fingerprinting” by the UK Home Office during the 1980s, as well as on the case of blood group testing of Chinese immigrants employed by the USA in the 1950s. At a moment of rampant digitalisation and automation of evermore clamped-down border regimes, I argue that historicizing the technopolitics of kinship surveillance decentres innovationist hypes around “smart” border technologies and challenges the naturalised epistemic authority and weaponisation of knowing and surveilling migrants’ familial relations.
Image-based sexual abuse (IBSA), including the non-consensual distribution of intimate images (NCII), is a rapidly growing problem. Private online reporting and removal tools, such as the Take It Down service run by the National Center for Missing & Exploited Children (NCMEC), can empower victim-survivors, especially young people, who face threats that their intimate images will be shared. By pre-emptively reporting images, users could ideally block them from ever being posted on multiple major online platforms with a single report, taking power away from perpetrators of sextortion and protecting against the reuploading of known IBSA content.
These services rely on sharing “perceptual hash values” (akin to digital fingerprints) of images with online platforms in order to match IBSA content without sharing the images or videos themselves. However, our research shows that generative AI attacks running on consumer-grade hardware can approximately reconstruct images from their hash values, an attack known as “hash inversion”. This indicates that hash values should be treated as carefully as the original images themselves; otherwise, vulnerable users’ privacy may be put at risk, for example if the perceptual hash values of reported intimate images became public as the result of a data breach.
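For readers unfamiliar with perceptual hashing, the sketch below implements one of the simplest members of the family, the “average hash” (aHash), in Python. It is purely illustrative: the algorithms actually used by Take It Down and partner platforms are more robust, and nothing here is taken from the abstract's research. It shows why matching can work on hash values alone, and also why those values leak coarse image structure, which is what makes hash inversion possible.

```python
# Minimal "average hash" (aHash) sketch; illustrative only, not the
# algorithm used by any real reporting service.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Downscale to size x size greyscale; each bit records whether a
    pixel is brighter than the mean. Similar images yield similar bits."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | int(p > mean)
    return bits

def hamming_distance(h1: int, h2: int) -> int:
    """Number of differing bits; a small distance suggests a match."""
    return bin(h1 ^ h2).count("1")

# Platforms can compare hashes without ever seeing the images, but each
# bit encodes real brightness structure -- the foothold for inversion.
```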
To mitigate this attack, we propose implementing Private Set Intersection (PSI) as an additional layer of protection, enhancing security and privacy for users whilst maintaining the functionality required to detect and remove IBSA content. We highlight the future potential of private pre-emptive reporting to combat sextortion threats, and the need for user-focused design and greater transparency in IBSA reporting and removal tools.
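As background, the following is a hedged Python sketch of one standard PSI construction, Diffie-Hellman-based PSI; the abstract does not specify which PSI variant the authors propose, the two parties are simulated in a single function for brevity, and the toy modulus is far too small for real use. The idea is that two parties learn which hash values they hold in common without either side revealing its non-matching values.

```python
# Toy Diffie-Hellman-style PSI sketch (one common construction; not
# necessarily the variant the paper proposes). Both parties are simulated
# locally here; a real deployment would split this across a network and
# use an elliptic-curve group with a proper hash-to-group function.
import hashlib
import secrets

P = 2**127 - 1  # Mersenne prime; placeholder modulus, NOT secure in practice

def h2g(item: bytes) -> int:
    """Hash an item (e.g. a perceptual hash value) into the group."""
    return int.from_bytes(hashlib.sha256(item).digest(), "big") % P

def psi_intersection(client_items, server_items):
    a = secrets.randbelow(P - 2) + 1   # client's secret exponent
    b = secrets.randbelow(P - 2) + 1   # server's secret exponent

    # Client sends H(x)^a; server returns (H(x)^a)^b plus its own H(y)^b.
    client_masked = [pow(h2g(x), a, P) for x in client_items]
    double_masked = {pow(c, b, P) for c in client_masked}
    server_masked = [pow(h2g(y), b, P) for y in server_items]

    # Client raises the server's values to a: H(x)^(ab) == H(y)^(ab) only
    # when x == y, so matches surface without exposing non-matches.
    return [y for y, s in zip(server_items, server_masked)
            if pow(s, a, P) in double_masked]

print(psi_intersection([b"hashA", b"hashB"], [b"hashB", b"hashC"]))  # [b'hashB']
```

Under this kind of design, a breach of either party's masked values would reveal blinded group elements rather than raw perceptual hashes, which is the extra layer of protection the abstract argues for.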
The Protection of Children Act 1978 (PCA) is widely considered the definitive piece of legislation with regard to youth sexual image sharing. It makes it an offence to make, possess or distribute indecent images of anyone under 18, and was designed to respond to cases where adults sexually abused children and filmed or photographed that abuse. As youth sexual image sharing has become increasingly normalised, many have called for the legislation to be changed or updated to prevent the over-criminalisation of young people. Another issue with the PCA is its influence on education policy, where it promotes a prevention/prohibition approach. I will explore how this prohibition message does not in fact prevent young people from being victimised by adults, but instead serves to reinforce the threats and control tactics used by groomers who coerce young people into taking and sharing sexual images; the PCA is therefore no longer compatible with children’s rights. I will discuss how the Online Safety Act 2023’s development of non-consensual image sharing offences may offer an alternative approach. This approach could be used to foreground young people’s consent, whilst also providing opportunities to share details of support services and of how to remove images that have been shared to social media and pornography sites (such as takeitdown.ncmec.org), which a straightforward prevention message cannot easily achieve. I will conclude by showing how this approach is more compatible with children’s rights and can challenge, rather than reinforce, the tactics used by groomers.
There is notable growth in the use of deepfake technology to create fake, yet indistinguishable from real, sexual images and videos of others without their consent. Though there is an emerging understanding of the impact this has on its targets, that evidence comes almost entirely from individuals whose facial likeness has been used within the media, with little attention paid to those whose bodies have been used as the canvas. Across 321 participants (Mage = 45.70 years, SD = 15.88; 48.9% female), we explored societal judgements of survivors whose face and/or body likeness had been used to create sexualized videos via a vignette design, which also took into account whether those survivors were sex workers or not. Though perceived criminality did not differ across our conditions, participants allocated more blame and anticipated less impact for the body target, relative to the face target, especially if the target was described in the vignette as a sex worker. Moreover, when accounting for personality traits, beliefs, and demographics, being male and viewing sex work as ‘a choice’ and/or ‘deviant’ predicted greater victim-blame, lower perceived criminality of deepfaking, and lower anticipated harm, with increased empathy being the only predictor of higher anticipated harm. Results suggest a need to understand the broader impacts of sexualized deepfake abuse for both facial and body targets, and to continue to generate public awareness of the impact this form of image-based sexual abuse can have on its survivors.