Exocytosis is a complex, tightly regulated process by which presynaptic neurons release neurotransmitters, and precise control of this process is crucial for neurotransmission. Synapsin and the SNARE (Soluble NSF Attachment Protein Receptor) complex play significant roles in regulating exocytosis. Studies have demonstrated that synapsin modulates vesicle release by controlling the movement of vesicles to the active zone, where the SNARE complex drives vesicle fusion with the presynaptic membrane. Although synapsin is among the most abundant synaptic proteins and both proteins interact with synaptic vesicles, the role of synapsin in modulating SNARE dynamics remains unclear. In this investigation, we employed magnetic tweezers to probe the interaction between synapsin and the SNARE complex. By exerting controlled forces on individual SNARE complexes in the presence of synapsin, we observed that synapsin can alter the mechanical properties of the SNARE complex, implying a potential role for synapsin in regulating neurotransmitter release through its effects on SNARE dynamics. These findings emphasize the importance of exploring synapsin-SNARE interactions in the nervous system and offer fresh insights into the role of synapsin in neuronal function.

The trans-activating CRISPR RNA (tracrRNA) is fundamental to the CRISPR/Cas9 system, forming the guide RNA together with the crRNA. Despite its known importance in crRNA maturation and Cas9 RNP-mediated DNA cleavage, the exact function of the tracrRNA scaffolds remains unclear. In this investigation, we generated five tracrRNA variants by removing specific scaffolds, including Stem loops 1, 2, and 3 and the Linker. Using a new single-molecule assay, we directly observed target binding and cleavage processes guided by Cas9 RNP. Our findings underscore the vital role of the Linker in initiating R-loop formation and highlight the significance of Stem loop 2 in identifying PAM-distal mismatches within target DNA. Furthermore, we explored cleavage efficiency by adding tracrRNA segments, indicating that the integrity of Stem loops 2 and 3 is crucial for potent Cas9 activity. We believe these results deepen our understanding of Cas9 functionality and offer insights into its detailed mechanism from target binding to cleavage.

Audio-to-talking face generation stands at the forefront of advancements in generative AI. It bridges the gap between audio and visual representations by generating synchronized, realistic talking faces, significantly improving human-computer interaction and content accessibility for diverse audiences. Despite substantial research in this area, critical challenges such as unrealistic facial animations, inaccurate audio-lip synchronization, and intensive computational demands continue to limit the practical application of talking-face generation methods. To address these issues, we introduce a novel approach leveraging the emerging capabilities of Stable diffusion models and vision Transformers for Talking face generation (StableTalk). By incorporating a Re-attention mechanism and an adversarial loss into StableTalk, we markedly enhance audio-lip alignment and the consistency of facial animations across frames. More importantly, we optimize computational efficiency by refining operations within the latent space and dynamically adjusting the visual focus based on the given conditions. Our experimental results demonstrate that StableTalk surpasses existing methods in image quality, audio-lip synchronization, and computational efficiency.
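The Re-attention mechanism is not detailed in the abstract; one common formulation (proposed in the DeepViT architecture) re-mixes the per-head attention maps with a learnable transformation before they weight the values. The PyTorch sketch below illustrates that formulation under this assumption; the class and parameter names (ReAttention, theta) are illustrative, not StableTalk's actual implementation.

```python
import torch
import torch.nn as nn

class ReAttention(nn.Module):
    """Multi-head self-attention with re-attention: attention maps are
    mixed across heads by a learnable 1x1 transformation before they
    weight the values. A minimal sketch; dimensions are illustrative."""
    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        # Learnable head-mixing matrix (theta), applied to attention maps.
        self.theta = nn.Conv2d(num_heads, num_heads, kernel_size=1)
        self.norm = nn.BatchNorm2d(num_heads)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):
        B, N, C = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)           # each: (B, H, N, d)
        attn = (q @ k.transpose(-2, -1)) * self.scale  # (B, H, N, N)
        attn = attn.softmax(dim=-1)
        attn = self.norm(self.theta(attn))             # re-attention step
        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)
```

In a diffusion-based generator, a block like this would presumably replace the standard multi-head self-attention in the Transformer layers operating on latent tokens.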

The eukaryotic cell cycle, a pivotal biological process, has been extensively studied and mathematically modelled in recent decades. Despite concerted efforts, identifying the minimal gene set essential for orderly cell cycle progression remains elusive. Synthetic biology, renowned for genetic engineering applications, also provides a pathway for addressing fundamental biological queries through “learning from building.” The Synthetic Yeast Genome (Sc2.0) project exemplifies this by synthesising Saccharomyces cerevisiae’s genome with changes that advance our understanding of eukaryotic genomes.
Expanding on Sc2.0’s groundwork, we aim to pioneer synthetic yeast genomes that are minimal, modular, and reprogrammable. As a proof of concept, we constructed a synthetic genome module housing nine key cell cycle genes. Employing CRISPR, we systematically deleted these genes from their native loci and reinserted them together as a synthetic gene cluster. While each of the nine genes is individually non-essential, their combined absence renders the synthetic module indispensable.
Through Cre/loxP-mediated recombination, we investigated which gene combinations are necessary for yeast cell cycle progression. Cre recombinase facilitated targeted gene deletions between intergenic loxP sites within the module, rapidly generating diverse strains with combinatorial cluster deletion profiles covering all potential combinations. Using flow cytometry sorting, we isolated hundreds of viable deletion combinations, and we developed the Pool of Long Amplified Reads (POLAR) sequencing technique to analyse gene deletion frequencies and gene content combinations across hundreds of strains with different cell cycle modules. These experimental findings were compared to computational models of the cell cycle, bringing us closer to understanding the minimal gene content for this function.
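As an aside on scale: nine genes give 2^9 = 512 possible presence/absence combinations. The sketch below shows how gene-content profiles recovered from long reads might be tallied; the placeholder gene names and data structures are ours for illustration, not the actual POLAR analysis pipeline.

```python
from collections import Counter
from itertools import product

GENES = [f"gene{i}" for i in range(1, 10)]  # placeholders for the 9 genes

# All 2^9 = 512 possible presence/absence profiles of the module.
all_profiles = list(product([0, 1], repeat=len(GENES)))
assert len(all_profiles) == 512

def profile(read_genes, genes=GENES):
    """Convert the set of genes detected on one long read into a
    presence/absence tuple over the module."""
    return tuple(int(g in read_genes) for g in genes)

def tally(reads):
    """Count how often each gene-content combination is observed,
    e.g. to estimate deletion-combination frequencies in the pool."""
    return Counter(profile(r) for r in reads)

# Illustrative usage with made-up reads:
reads = [{"gene1", "gene3", "gene9"}, {"gene1", "gene3", "gene9"},
         {"gene2", "gene5"}]
for prof, n in tally(reads).most_common():
    kept = [g for g, bit in zip(GENES, prof) if bit]
    print(n, "reads:", ", ".join(kept) or "all deleted")
```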
Building on this pioneering work, we now envisage a future where genome designers can predict the gene sets necessary for specialised tasks, synthetically arrange these genes on chromosomes, and design intergenic regions to regulate their expression appropriately.

International commerce is a sphere where well-built customs rules are crucial. Nevertheless, owing to illegal acts and fraudulent undertakings, there is a pressing need for security and economic soundness in customs controls. India’s customs service and related organizations employ artificial-intelligence-based technologies that aid in combating illegal trade globally. This paper examines how AI can be used to identify people who misuse technology for illicit imports or exports. The evaluation also demonstrates how border control has become more dependent on AI, identifies major concerns, and predicts future trends. AI may provide an opportunity to strengthen border security while expediting legitimate business relations.

People quickly recognise human actions carried out in everyday activities. There is evidence that Minimal Recognisable Configurations (MIRCs) contain a combination of spatial and temporal visual features critical for reliable recognition. For complex activities, observers may produce descriptions that vary in their semantic similarity (e.g., washing dishes vs cleaning dishes), potentially complicating the investigation of MIRCs in action recognition. We therefore measured the semantic consistency of descriptions for 128 short videos of complex actions from the Epic-Kitchens-100 dataset (Damen et al., 2022), selected based on poor classification performance by our state-of-the-art computer vision network MOFO (Ahmadian et al., 2023). In an online experiment, participants viewed each video and identified the performed action by typing a description of 2-3 words (capturing action and object). Each video was classified by at least 30 participants (N=76 total). Semantic consistency of the responses was determined using a custom pipeline built on the sentence-BERT language model, which generated embedding vectors representing the semantic properties of the responses. We then used adjusted pair-wise cosine similarities between response vectors to compute a ground-truth description for each video: the response with the greatest semantic neighbourhood density (e.g., pouring oil, closing shelf). The greater the semantic neighbourhood density of a ground-truth candidate, the more semantically consistent the responses for the associated video. We identified 87 videos whose semantic consistency confirmed their reliable recognisability, i.e. where the cosine similarity between the ground-truth candidate and at least 70% of responses exceeded a similarity threshold of 0.65. We will use a subsample of these videos to investigate the role of MIRCs in human action recognition, e.g., by gradually degrading the spatial and temporal information in videos and measuring the impact on action recognition. The derived semantic space and MIRCs will be used to revise MOFO into a more biologically consistent and better-performing model.
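A minimal sketch of this kind of consistency analysis follows, assuming the sentence-transformers implementation of sentence-BERT and the thresholds stated above (cosine similarity 0.65, 70% coverage). The model choice and function names are illustrative rather than the authors' exact pipeline, and the "adjusted" similarity step is not reproduced here.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice

def ground_truth(responses, sim_thresh=0.65, coverage=0.70):
    """Pick the response with the greatest semantic neighbourhood density
    and report whether the video counts as reliably recognisable."""
    emb = model.encode(responses, normalize_embeddings=True)
    sims = emb @ emb.T                 # pairwise cosine similarities
    np.fill_diagonal(sims, 0.0)
    density = sims.sum(axis=1)         # neighbourhood density per response
    best = int(np.argmax(density))
    # Fraction of other responses close to the ground-truth candidate.
    frac = (sims[best] >= sim_thresh).sum() / (len(responses) - 1)
    return responses[best], frac >= coverage

# Illustrative usage:
gt, consistent = ground_truth(["pouring oil", "pour oil", "adding oil",
                               "closing shelf"])
print(gt, consistent)
```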

Electrochemical potentials are essential for cellular life. For instance, cells generate and harness electrochemical gradients to drive a myriad of fundamental processes, from nutrient uptake and ATP synthesis to neuronal transduction. To generate and maintain these gradients, all cellular membranes carefully regulate ionic fluxes using a broad array of transport proteins. Because of this complexity, it is extremely difficult to untangle specific ion transport pathways and link them to membrane potential variations in live-cell studies. Conversely, synthetic membrane models, such as black lipid membranes and liposomes, are free of the structural complexity of cells and thus make it possible to isolate particular ion transport mechanisms and study them under tightly controlled conditions. Still, there is a lack of quantitative methods for correlating ionic fluxes with electrochemical gradient buildup in membrane models, which limits the use of these models as a tool for unravelling the coupling between ion transport and electrochemical gradients. We developed a fluorescence-based approach for resolving the dynamic variation of membrane potential in response to ionic flux across giant unilamellar vesicles (GUVs). To gain maximal control over the size and membrane composition of these micron-sized liposomes, we developed an integrated microfluidic platform capable of high-throughput production and purification of monodisperse GUVs. By combining our microfluidic platform with quantitative fluorescence analysis, we determined the permeation rates of two biologically important electrolytes, protons (H+) and potassium ions (K+), and correlated their flux with electrochemical gradient accumulation across the lipid bilayer of single GUVs. Applying similar analysis principles, we also determined the permeation rate of K+ across two archetypal ion channels, gramicidin A and outer membrane porin F (OmpF). We then showed that the translocation rate of H+ across gramicidin A is four orders of magnitude higher than that of K+, whereas OmpF showed similar transport rates for both ions.
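For a spherical vesicle, passive solute equilibration is commonly modelled as first-order kinetics with rate constant k = PA/V = 3P/r, so the permeability P can be estimated by fitting an exponential to a normalized fluorescence time course. The sketch below illustrates that standard analysis under this assumption; it is not the authors' exact fitting code, and all names are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def relax(t, k, f0, f_inf):
    """First-order approach to equilibrium: F(t) = f_inf + (f0 - f_inf)e^(-kt)."""
    return f_inf + (f0 - f_inf) * np.exp(-k * t)

def permeability(t, fluorescence, radius_m):
    """Fit the fluorescence time course of a single GUV and convert the
    rate constant to a permeability via k = 3P/r (sphere)."""
    (k, f0, f_inf), _ = curve_fit(relax, t, fluorescence,
                                  p0=(0.1, fluorescence[0], fluorescence[-1]))
    return k * radius_m / 3.0   # P in m/s

# Illustrative usage with synthetic data for a 10 um-radius GUV:
t = np.linspace(0, 60, 200)                      # seconds
f = relax(t, 0.05, 1.0, 0.2) + 0.01 * np.random.randn(t.size)
print(permeability(t, f, 10e-6))                 # ~1.7e-7 m/s expected
```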

This research presents a groundbreaking approach to plant phenotyping that harnesses 3D point clouds generated from video data. Focusing on the comprehensive characterization of plant traits, the method enhances the precision and depth of phenotypic analysis, which is crucial for advances in genetics, breeding, and agricultural practice.

Advanced Video Data Capture and Processing for Detailed Segmentation

High-Fidelity Video Acquisition: Capturing detailed video footage of plants under varying environmental conditions forms the foundation of this method. The use of high-resolution cameras allows for capturing minute details crucial for accurate part segmentation.

Rigorous Preprocessing for Optimal Data Quality: Following capture, the video data undergoes meticulous preprocessing. Stabilization, noise filtering, and color correction are performed to ensure that the subsequent segmentation algorithms can accurately identify different parts of the plant.
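As a toy illustration of this stage, the sketch below denoises a single frame and applies a simple gray-world colour correction with OpenCV; the specific filters and parameters are assumptions, not the pipeline used in this work.

```python
import cv2
import numpy as np

def preprocess_frame(frame):
    """Denoise and colour-correct one BGR video frame: non-local-means
    denoising followed by gray-world white balance (scale each channel
    so all channel means match the global mean)."""
    den = cv2.fastNlMeansDenoisingColored(frame, None, 5, 5, 7, 21)
    means = den.reshape(-1, 3).mean(axis=0)
    gain = means.mean() / means
    return np.clip(den * gain, 0, 255).astype(np.uint8)
```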

Segmentation and 3D Point Cloud Generation: The application of state-of-the-art image processing algorithms segments the plant parts within each video frame. Subsequently, photogrammetry and depth estimation techniques create detailed 3D point clouds, effectively capturing the geometry of individual plant components.

Part Segmentation and Trait Measurement for Enhanced Phenotyping

Precise Plant Part Segmentation: This methodology enables the accurate segmentation of individual plant parts, such as leaves, stems, and flowers, within the 3D space. This precise segmentation is crucial for assessing complex plant traits and understanding plant structure in its entirety.

Comprehensive Trait Measurement: The 3D point clouds facilitate comprehensive measurements of plant traits. This includes quantifying leaf area, stem thickness, flower size, and even more subtle features like leaf venation patterns, providing a multi-dimensional view of plant phenotypic traits.
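As one concrete, hypothetical illustration of trait measurement on a labeled point cloud, the sketch below estimates per-leaf area (via a convex-hull proxy) and stem thickness (via the radial spread of stem points around their principal axis). The labeling scheme and all function names are assumptions for illustration.

```python
import numpy as np
from scipy.spatial import ConvexHull

def leaf_areas(points, labels, leaf_ids):
    """Crude per-leaf area proxy: surface area of the convex hull of each
    leaf's points, halved because a thin leaf contributes two hull faces.
    `points` is an (N, 3) array; `labels` assigns a part id to each point."""
    areas = {}
    for leaf in leaf_ids:
        pts = points[labels == leaf]
        if len(pts) >= 4:              # hull needs >= 4 non-coplanar points
            areas[leaf] = ConvexHull(pts).area / 2.0
    return areas

def stem_thickness(points, labels, stem_id):
    """Estimate stem thickness as twice the median radial distance of stem
    points from the stem's principal axis (first PCA component)."""
    pts = points[labels == stem_id]
    centered = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0]                                   # approximate stem axis
    radial = centered - np.outer(centered @ axis, axis)
    return 2.0 * np.median(np.linalg.norm(radial, axis=1))
```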

Temporal Tracking for Dynamic Trait Analysis: An integral advantage of using video data is the ability to track and measure these traits over time. This dynamic analysis allows for monitoring growth patterns, developmental changes, and responses to environmental stimuli in a way that static images cannot achieve.

Conclusion: A Breakthrough in Plant Phenotyping and Agricultural Research
This research significantly enhances the capability for detailed plant part segmentation and trait measurement, setting a new standard in plant phenotyping. The level of detail and accuracy afforded by this method offers invaluable insights for agricultural technology, plant genetics, and breeding programs. It represents a critical step forward in our ability to understand and optimize plant characteristics, with far-reaching implications for food production and ecological sustainability.

Our rich, embodied visual experiences of the world involve integrating information from multiple sensory modalities, yet how the brain brings together multiple sensory reference frames to generate such experiences remains unclear. Recently, it has been demonstrated that BOLD fluctuations throughout the brain can be explained as a function of the activation pattern on the primary visual cortex (V1) topographic map. This class of ‘connective field’ models allows us to project V1’s map of visual space into the rest of the brain and discover previously unknown visual organization. Here, we extend this powerful principle to incorporate both visual and somatosensory topographies by explaining BOLD responses during naturalistic movie-watching as a function of two spatial patterns (connective fields) on the surfaces of V1 and S1. We show that responses in the higher levels of the visual hierarchy are characterized by multimodal topographic connectivity: these responses can be explained as a function of spatially specific activation patterns on both the retinotopic map and the somatosensory homunculus, indicating that somatosensory cortex participates in naturalistic vision. These multimodal tuning profiles are in line with known visual category selectivity, for example for faces and manipulable objects. Our findings demonstrate a scale and granularity of multisensory tuning far more extensive than previously assumed. When inspecting topographic tuning in S1, we find that a full band of extrastriate visual cortex, from retrosplenial cortex laterally to the fusiform gyrus, is tiled with somatosensory homunculi. These results demonstrate the intimate integration of information about visual coordinates and body parts in the brain, which likely supports visually guided movements and our rich, embodied experience of the world. Finally, we present initial data from a new, densely sampled 7T fMRI movie-watching dataset optimised to shed light on the brain basis of human action understanding.
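For readers unfamiliar with connective field modelling, a generic sketch follows: a target voxel's time course is predicted as a Gaussian-weighted sum of source-surface activity, with the Gaussian defined over cortical distance from a preferred vertex. This is the standard single-source formulation, not the authors' two-surface (V1 + S1) fitting code; in the multimodal extension, predictions from a V1 field and an S1 field would be combined.

```python
import numpy as np

def cf_prediction(source_ts, cortical_dist, center, sigma):
    """Connective field prediction: Gaussian weights over the source surface.
    source_ts: (V, T) time series of V source vertices;
    cortical_dist: (V, V) geodesic distances between source vertices;
    center: index of the preferred vertex; sigma: CF spread in mm."""
    w = np.exp(-cortical_dist[center] ** 2 / (2.0 * sigma ** 2))
    pred = w @ source_ts                      # weighted sum over vertices
    return (pred - pred.mean()) / pred.std()  # z-score for correlation

def fit_cf(voxel_ts, source_ts, cortical_dist, sigmas=(1, 2, 4, 8, 16)):
    """Grid search over candidate centers and sigmas, maximizing the
    correlation with the voxel's observed time course."""
    z = (voxel_ts - voxel_ts.mean()) / voxel_ts.std()
    best = max((np.corrcoef(z, cf_prediction(source_ts, cortical_dist,
                                             c, s))[0, 1], c, s)
               for c in range(source_ts.shape[0]) for s in sigmas)
    return best  # (correlation, center_vertex, sigma)
```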

We do not notice everything in front of us, due to our limited attention capacity. What we attend to forms our conscious experience and is what we retain over time. Thus, creative content creators must strive to direct viewers' attention in different media, from cinema to computer games. To do this they have developed various techniques that either directly use centrally presented cues, such as arrows or instructions, to move attention, or rely on image features, so-called “bottom-up” cues, that manipulate the salience of parts of an image. Shifting attention usually involves moving our central vision around a screen, and the challenge of directing it becomes more pronounced in virtual environments where users are free to explore by moving in any direction, as in first-person, screen-based computer video games. Such an experience allows users to choose how they sample their environment, yet the designer of the environment often wishes the user to interact with and view certain parts of the scene. In this study we test a subtle manipulation of visual attention through varying depth of field, a cinematic technique that can be implemented in virtual worlds and involves keeping parts of the scene in focus whilst blurring other parts. We used eye tracking to investigate this technique in a 3D game environment rendered on a monitor screen. Participants navigated through the environment using keyboard keys; in the first part they explored freely, and in the second part they were instructed to find a target object. We manipulated whether the frames were rendered fully in focus (termed a deep depth of field) or with a shallow depth of field applied (where the outer edges of the scene appear blurred). We measured where on the screen participants looked, dividing the screen into 3×3 equal-sized regions and calculating the proportion of time participants spent looking in the central square. On average across all trials, participants spent 67% of their fixation time on the central area of the screen, suggesting that they preferred to navigate by looking in the direction they were heading. When freely exploring the scene, participants spent significantly more time looking at the centre of the screen with a shallow depth of field than with a deep depth of field; this difference disappeared during the search task. We demonstrate how these techniques might be effective for manipulating attention by keeping users' eyes looking straight ahead while they freely explore a virtual environment.
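A minimal sketch of the screen-region analysis described above: divide the screen into a 3×3 grid and compute the proportion of fixation time falling in the central cell. Variable names and the synthetic usage data are illustrative.

```python
import numpy as np

def central_proportion(fix_x, fix_y, fix_dur, width, height):
    """Proportion of total fixation time spent in the central cell of a
    3x3 grid over the screen. fix_x/fix_y are fixation coordinates in
    pixels; fix_dur holds the corresponding fixation durations."""
    col = np.clip((np.asarray(fix_x) // (width / 3)).astype(int), 0, 2)
    row = np.clip((np.asarray(fix_y) // (height / 3)).astype(int), 0, 2)
    central = (col == 1) & (row == 1)
    dur = np.asarray(fix_dur, dtype=float)
    return dur[central].sum() / dur.sum()

# Illustrative usage on a 1920x1080 screen:
print(central_proportion([960, 100, 1000], [540, 80, 600],
                         [250, 120, 300], 1920, 1080))
```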