Jun 2025 • Journal of Computer and System Sciences 148, 103588, 2025
Amotz Bar-Noy, Toni Böhnlein, David Peleg, Yingli Ran, Dror Rawitz
We study the question of whether a sequence d = (d_1, …, d_n) of positive integers is the degree sequence of some outerplanar (a.k.a. 1-page book embeddable) graph G. If so, G is an outerplanar realization of d and d is an outerplanaric sequence. The case where ∑d ≤ 2n - 2 is easy, as d has a realization by a forest (which is trivially an outerplanar graph). In this paper, we consider the family D of all sequences d of even sum with 2n ≤ ∑d ≤ 4n - 6 - 2ω₁, where ω_x is the number of x's in d. (The second inequality is a necessary condition for a sequence d with ∑d ≥ 2n to be outerplanaric.) We partition D into two disjoint subfamilies, D = D1 ∪ D2, such that every sequence in D1 is provably non-outerplanaric, and every sequence in D2 is given a realizing graph G enjoying a 2-page book embedding (and moreover, one of the pages is also bipartite).
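As a quick sanity check of the two degree-sum bounds quoted above (as reconstructed here), a minimal Python sketch; the function names are ours, not the paper's:

    def forest_realizable(d):
        # A forest on n vertices has at most n - 1 edges, so sum(d) <= 2n - 2
        # (an even sum is required for any degree sequence).
        return sum(d) % 2 == 0 and sum(d) <= 2 * len(d) - 2

    def necessary_outerplanaric(d):
        # Necessary (not sufficient) condition from the abstract:
        # even sum and sum(d) <= 4n - 6 - 2*omega_1, with omega_1 = #{1's in d}.
        n = len(d)
        return sum(d) % 2 == 0 and sum(d) <= 4 * n - 6 - 2 * d.count(1)

    print(forest_realizable([1, 1, 2, 2]))        # True: realized by a path
    print(necessary_outerplanaric([3, 3, 3, 3]))  # False: K4's sequence; K4 is not outerplanar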
Jan 2025 • arXiv preprint arXiv:2401.01650
Idit Diamant, Amir Rosenfeld, Idan Achituve, Jacob Goldberger, Arnon Netzer
Source-free domain adaptation (SFDA) aims to adapt a source-trained model to an unlabeled target domain without access to the source data. SFDA has attracted growing attention in recent years, with existing approaches focusing on self-training, which usually includes pseudo-labeling techniques. In this paper, we introduce a novel noise-learning approach tailored to the noise distribution arising in domain adaptation settings, which learns to de-confuse the pseudo-labels. More specifically, we learn a noise transition matrix of the pseudo-labels to capture the label corruption of each class and learn the underlying true label distribution. Estimating the noise transition matrix enables better true class-posterior estimation, resulting in better prediction accuracy. We demonstrate the effectiveness of our approach when combined with several SFDA methods: SHOT, SHOT++, and AaD. We obtain state-of-the-art results on three domain adaptation datasets: VisDA, DomainNet, and OfficeHome.
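A hedged sketch of the general noise-transition-matrix technique the abstract describes (not the authors' training code): the model's clean-class posterior is pushed through a learned row-stochastic transition matrix before computing the loss against the noisy pseudo-labels.

    import torch
    import torch.nn.functional as F

    def transition_corrected_loss(logits, pseudo_labels, T):
        # T[i, j] ~ P(pseudo-label = j | true class = i); rows sum to 1.
        # (Illustrative names; not the paper's API.)
        p_true = F.softmax(logits, dim=1)  # estimated true class posterior
        p_noisy = p_true @ T               # implied distribution over noisy pseudo-labels
        return F.nll_loss(torch.log(p_noisy + 1e-8), pseudo_labels)

    # Keep T row-stochastic by parameterizing it with a softmax over free parameters.
    theta = torch.randn(10, 10, requires_grad=True)
    T = F.softmax(theta, dim=1)
    loss = transition_corrected_loss(torch.randn(4, 10), torch.randint(0, 10, (4,)), T)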
2025 • International Conference on the Theory and Application of Cryptology and …, 2025
Carmit Hazay, David Heath, Vladimir Kolesnikov, Muthuramakrishnan Venkitasubramaniam, Yibin Yang
In the Zero-Knowledge Proof (ZKP) of a disjunctive statement, the prover P and the verifier V agree on B fan-in-2 circuits C_0, …, C_{B-1} over a field F; each circuit has n_in inputs, n_× multiplications, and one output. P's goal is to demonstrate knowledge of a witness w such that C_id(w) = 0 for some id ∈ [B], where neither w nor id is revealed. Disjunctive statements are effective, for example, in implementing ZKP based on sequential execution of CPU steps.
2025 • Cryptology ePrint Archive
Carmit Hazay, Muthuramakrishnan Venkitasubramaniam, Mor Weiss
Leakage-resilient cryptography aims to protect cryptographic primitives from so-called "side-channel attacks" that exploit their physical implementation to learn their input or secret state. Starting from the works of Ishai, Sahai and Wagner (CRYPTO '03) and Micali and Reyzin (TCC '04), most works on leakage-resilient cryptography either focus on protecting general computations, such as circuits or multiparty computation protocols, or on specific non-interactive primitives such as storage, encryption, and signatures. This work focuses on leakage-resilience for the middle ground, namely for distributed and interactive cryptographic primitives. Our main technical contribution is designing the first secret-sharing scheme that is equivocal and resists adaptive probing of a constant fraction of the bits of each share, while incurring only a constant blowup in share size. Equivocation is a strong leakage-resilience guarantee, recently introduced by Hazay et al. (ITC '21). Our construction is obtained via a general compiler which we introduce, that transforms any secret-sharing scheme into an equivocal scheme against adaptive leakage. An attractive feature of our compiler is that it respects additive reconstruction, namely, if the original scheme has additive reconstruction, then the transformed scheme has linear reconstruction. We extend our compiler to a general paradigm for protecting distributed primitives against leakage, and show its applicability to various primitives, including secret sharing, verifiable secret sharing, function secret sharing, distributed encryption and signatures, and distributed zero-knowledge proofs. For each of these primitives, our paradigm …
Dec 2024 • arXiv e-prints
Ram Dyuthi Sristi, Ofir Lindenbaum, Maria Lavzin, Jackie Schiller, Gal Mishne, Hadas Benisty
We study the problem of contextual feature selection, where the goal is to learn a predictive function while identifying subsets of informative features conditioned on specific contexts. Towards this goal, we generalize the recently proposed stochastic gates (STG) [Yamada et al., 2020] by modeling the probabilistic gates as conditional Bernoulli variables whose parameters are predicted based on the contextual variables. Our new scheme, termed conditional-STG (c-STG), comprises two networks: a hypernetwork that establishes the mapping between contextual variables and probabilistic feature selection parameters, and a prediction network that maps the selected features to the response variable. Training the two networks simultaneously ensures the comprehensive incorporation of context and feature selection within a unified model. We provide a theoretical analysis to examine several properties of the proposed …
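A minimal PyTorch-style sketch of the two-network c-STG architecture described above; the clipped-Gaussian gate relaxation is carried over from the original STG, and all layer sizes and names are our own illustrative choices:

    import torch
    import torch.nn as nn

    class ConditionalSTG(nn.Module):
        def __init__(self, context_dim, feature_dim, hidden=64, sigma=0.5):
            super().__init__()
            self.sigma = sigma
            # Hypernetwork: contextual variables -> per-feature gate parameters mu(c)
            self.hyper = nn.Sequential(nn.Linear(context_dim, hidden), nn.ReLU(),
                                       nn.Linear(hidden, feature_dim))
            # Prediction network: gated features -> response
            self.pred = nn.Sequential(nn.Linear(feature_dim, hidden), nn.ReLU(),
                                      nn.Linear(hidden, 1))

        def forward(self, x, c):
            mu = self.hyper(c)
            noise = torch.randn_like(mu) * self.sigma if self.training else 0.0
            z = torch.clamp(mu + noise + 0.5, 0.0, 1.0)  # relaxed conditional Bernoulli gates
            return self.pred(x * z), z  # z exposes which features were selected per context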
Dec 2024 • ACS Omega
Tal Raviv, Zeev Kalyuzhner, Zeev Zalevsky
In recent years, there has been growing interest in optical data processing, driven by the demand for high-speed and high-bandwidth data handling in data centers. One of the key milestones for enabling effective all-optical data processing systems is the development of efficient optical memory. Previously, we introduced a novel approach for establishing nonvolatile optical memory, based on the classification of scattering fields generated by gold nanoparticles. In this ongoing research, we apply advanced machine learning techniques to enhance the performance of the proposed nonvolatile memory element. By utilizing Random Forest and t-SNE algorithms, we successfully classified and analyzed the scattered images obtained from the optical memory device. The classification model presented in this study achieved an accuracy and average F1-score of 0.81 across nine distinct classes.
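The reported pipeline (Random Forest classification plus t-SNE analysis of the scattered images) can be approximated generically in scikit-learn; this sketch uses placeholder data, not the actual optical measurements:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.manifold import TSNE
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import f1_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(900, 64))    # placeholder for scattered-image features
    y = rng.integers(0, 9, size=900)  # nine memory states

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
    print("macro F1:", f1_score(y_te, clf.predict(X_te), average="macro"))

    emb = TSNE(n_components=2, perplexity=30).fit_transform(X_te)  # 2-D view of class structure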
Dec 2024 • Intelligent Systems with Applications
Ohad Volk, Gonen Singer
We design an adaptive learning algorithm for binary classification problems whose objective is to reduce the cost of misclassified instances derived from the consequences of errors. Our algorithm (Adaptive Cost-Sensitive Learning — AdaCSL) adaptively adjusts the loss function to bridge the difference in class distributions between subgroups of samples in the training and validation data sets. This adjustment is made for samples with similar predicted probabilities, in such a way that the local cost decreases. This process usually leads to a reduction in cost when applied to the test data set (i.e., under a local training-test class-distribution mismatch). We present empirical evidence that neural networks used with the proposed algorithm yield better cost results on several data sets compared to other approaches. In addition, the proposed AdaCSL algorithm can optimize evaluation metrics other than cost. We …
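A rough sketch of the bin-wise idea described above, comparing class distributions between training and validation samples with similar predicted probabilities; the actual AdaCSL update rule differs and is given in the paper:

    import numpy as np

    def binwise_cost_weights(p_train, y_train, p_val, y_val, n_bins=10):
        # Illustrative only: within each predicted-probability bin, compare the
        # positive-class rate between training and validation, and derive a
        # multiplicative weight that nudges the training loss toward the
        # validation distribution.
        bins = np.linspace(0, 1, n_bins + 1)
        w = np.ones(n_bins)
        for b in range(n_bins):
            tr = (p_train >= bins[b]) & (p_train < bins[b + 1])
            va = (p_val >= bins[b]) & (p_val < bins[b + 1])
            if tr.sum() and va.sum():
                w[b] = (y_val[va].mean() + 1e-6) / (y_train[tr].mean() + 1e-6)
        return w  # applied per bin to the positive-class loss term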
Dec 2024 • arXiv preprint arXiv:2412.12951
Jonathan Svirsky, Yehonathan Refael, Ofir Lindenbaum
Large Language Models (LLMs), with billions of parameters, present significant challenges for full finetuning due to their high computational demands and memory requirements, which make full finetuning impractical in many real-world applications. When faced with limited computational resources or small datasets, updating all model parameters can often result in overfitting. To address this, lightweight finetuning techniques have been proposed, like learning low-rank adapter layers. These methods aim to train only a few additional parameters combined with the base model, which remains frozen, reducing resource usage and mitigating overfitting risks. In this work, we propose an adaptor model based on stochastic gates that simultaneously sparsifies the frozen base model and performs task-specific adaptation. Our method has a small number of trainable parameters and allows us to speed up base-model inference with competitive accuracy. We evaluate it in additional variants by equipping it with additional low-rank parameters and comparing it to several recent baselines. Our results show that the proposed method improves the accuracy of the finetuned model relative to several baselines and allows the removal of up to 20-40% of the base model without significant accuracy loss.
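A minimal sketch of an adapter in this spirit: stochastic gates over the outputs of a frozen layer, so that training the gate parameters both adapts and sparsifies the base model. The names, the gating location, and the clipped-Gaussian relaxation are our assumptions, not the paper's code:

    import torch
    import torch.nn as nn

    class GatedFrozenLinear(nn.Module):
        def __init__(self, base: nn.Linear, sigma=0.5):
            super().__init__()
            self.base = base.requires_grad_(False)                  # frozen base layer
            self.mu = nn.Parameter(torch.zeros(base.out_features))  # trainable gate parameters
            self.sigma = sigma

        def forward(self, x):
            noise = torch.randn_like(self.mu) * self.sigma if self.training else 0.0
            z = torch.clamp(self.mu + noise + 0.5, 0.0, 1.0)  # relaxed Bernoulli gates
            return self.base(x) * z  # gates driven to 0 prune units, speeding up inference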
Dec 2024 • arXiv preprint arXiv:2312.02102
Or Shalom, Amir Leshem, Waheed U Bajwa
Federated learning is a technique that allows multiple entities to collaboratively train models using their data without compromising data privacy. However, despite its advantages, federated learning can be susceptible to false data injection attacks. In these scenarios, a malicious entity with control over specific agents in the network can manipulate the learning process, leading to a suboptimal model. Consequently, addressing these data injection attacks presents a significant research challenge in federated learning systems. In this paper, we propose a novel technique to detect and mitigate data injection attacks on federated learning systems. Our mitigation method is a local scheme, performed during a single instance of training by the coordinating node, allowing mitigation during the convergence of the algorithm. Whenever an agent is suspected of being an attacker, its data is ignored for a certain period; this decision is periodically re-evaluated. We prove that, with probability 1, after a finite time, all attackers will be ignored, while the probability of ignoring a truthful agent becomes 0, provided that there is a majority of truthful agents. Simulations show that when the coordinating node detects and isolates all the attackers, the model recovers and converges to the truthful model.
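A toy version of the coordinator-side mitigation loop described above; the suspicion statistic here (a median-based deviation test) is our placeholder, the paper specifies its own detector:

    import numpy as np

    def aggregate(updates, suspected, round_idx, cooldown=50, recheck_every=10):
        # suspected: dict {agent index -> round at which it was flagged}
        active = [i for i in range(len(updates))
                  if i not in suspected or round_idx >= suspected[i] + cooldown]
        mean = np.mean([updates[i] for i in active], axis=0)
        if round_idx % recheck_every == 0:  # periodically re-evaluate the decision
            dists = {i: np.linalg.norm(updates[i] - mean) for i in active}
            cutoff = 3 * np.median(list(dists.values()))  # placeholder suspicion test
            for i, dist in dists.items():
                if dist > cutoff:
                    suspected[i] = round_idx  # ignore this agent for `cooldown` rounds
        return mean, suspected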
Dec 2024 • arXiv preprint arXiv:2312.13240
Amit Rozner, Barak Battash, Ofir Lindenbaum, Lior Wolf
We study the problem of performing face verification with an efficient neural model f. The efficiency of f stems from simplifying the face verification problem from an embedding nearest-neighbor search into a binary problem; each user has its own neural network f. To allow information sharing between different individuals in the training set, we do not train f directly but instead generate the model weights using a hypernetwork h. This leads to the generation of a compact personalized model for face identification that can be deployed on edge devices. Key to the method's success is a novel way of generating hard negatives and carefully scheduling the training objectives. Our model leads to a substantially small f, requiring only 23K parameters and 5M floating-point operations (FLOPs). We use six face verification datasets to demonstrate that our method is on par with or better than state-of-the-art models, with a significantly reduced number of parameters and computational burden. Furthermore, we perform an extensive ablation study to demonstrate the importance of each element in our method.
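A sketch of the hypernetwork idea at its simplest: a network that maps a user representation to the weights of a tiny personalized verifier. Sizes and names are illustrative; the paper's generated per-user model f is a full (23K-parameter) network rather than a single linear head:

    import torch
    import torch.nn as nn

    class UserVerifierHypernet(nn.Module):
        def __init__(self, user_dim=128, feat_dim=64):
            super().__init__()
            self.feat_dim = feat_dim
            self.head = nn.Linear(user_dim, feat_dim + 1)  # emits per-user weights + bias

        def forward(self, user_emb, face_feat):
            params = self.head(user_emb)
            w, b = params[..., :self.feat_dim], params[..., -1]
            logit = (face_feat * w).sum(-1) + b  # personalized binary verifier
            return torch.sigmoid(logit)          # probability the face matches this user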
Dec 2024 • IEEE Control Systems Letters
Luca Ballotta, Áron Vékássy, Stephanie Gil, Michal Yemini
This letter studies the Friedkin-Johnsen (FJ) model with diminishing competition, or stubbornness. The original FJ model assumes that each agent assigns a constant competition weight to its initial opinion. In contrast, we investigate the effect of diminishing competition on the convergence point and speed of the FJ dynamics. We prove that, if the competition is uniform across agents and vanishes asymptotically, the convergence point coincides with the nominal consensus reached with no competition. However, the diminishing competition slows down convergence according to its own rate of decay. We study this phenomenon analytically and provide upper and lower bounds on the convergence rate. Further, if competition is not uniform across agents, we show that the convergence point may not coincide with the nominal consensus point. Finally, we evaluate our analytical insights numerically.
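A small numerical illustration of the result, in our own toy setup: with uniform competition decaying as 1/k, the FJ dynamics approach the nominal consensus that the weight matrix W alone would reach:

    import numpy as np

    n = 5
    rng = np.random.default_rng(1)
    W = rng.random((n, n))
    W /= W.sum(axis=1, keepdims=True)  # row-stochastic opinion weights
    x0 = rng.random(n)                 # initial opinions
    x = x0.copy()
    for k in range(5000):
        lam = 1.0 / (k + 2)            # uniform, asymptotically vanishing competition
        x = lam * x0 + (1 - lam) * (W @ x)
    print(x)  # entries nearly equal: the nominal consensus of x <- W x, reached slowly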
Dec 2024 • Optics and Lasers in Engineering 183, 108536, 2024
Kobi Aflalo, Peng Gao, Vismay Trivedi, Abhijit Sanjeev, Zeev Zalevsky
In this comprehensive review, we delve into super-resolution optical imaging techniques and their diverse applications. Our primary focus is on linear optics super-resolution methods, encompassing a wide array of concepts ranging from time multiplexing, ptychography, and deep learning-based microscopy to compressive sensing and random phase encoding techniques. Additionally, we explore compressed sensing, non-spatial resolution improvement, and sparsity-based geometric super-resolution. Furthermore, we investigate various methods based on field of view, wavelength, coherence, polarization, gray level, and code division multiplexing, as well as localization microscopy. Our review extends to stimulated emission depletion microscopy via pump-probe super-resolution techniques, providing a detailed analysis of their working principles and applications. We then shift our attention to near-field scanning optical …
Dec 2024 • Quantum Science and Technology
Rafael Wagner, Zohar Schwartzman-Nowik, Ismael Lucas Paiva, Amit Te'eni, Antonio Ruiz-Molero, Rui Soares Barbosa, Eliahu Cohen, Ernesto Galvão
Weak values and Kirkwood-Dirac (KD) quasiprobability distributions have been independently associated with both foundational issues in quantum theory and advantages in quantum metrology. We propose simple quantum circuits to measure weak values, KD distributions, and spectra of density matrices without the need for post-selection. This is achieved by measuring unitary-invariant, relational properties of quantum states, which are functions of Bargmann invariants, the concept that underpins our unified perspective. Our circuits also enable experimental implementation of various functions of KD distributions, such as out-of-time-ordered correlators (OTOCs) and the quantum Fisher information in post-selected parameter estimation, among others. An upshot is a unified view of nonclassicality in all those tasks. In particular, we discuss how negativity and imaginarity of Bargmann invariants relate to set coherence.
Dec 2024 • IEEE Signal Processing Magazine 41 (4), 40-57, 2024
Tom Tirer, Raja Giryes, Se Young Chun, Yonina C Eldar
Deep learning in general focuses on training a neural network from large labeled datasets. Yet, in many cases there is value in training a network just from the input at hand. This may involve training a network from scratch using a single input or adapting an already trained network to a provided input example at inference time. This survey paper aims at covering deep internal-learning techniques that have been proposed in the past few years for these two important directions. While our main focus will be on image processing problems, most of the approaches that we survey are derived for general signals (vectors with recurring patterns that can be distinguished from noise) and are therefore applicable to other modalities. We believe that the topic of internal-learning is very important in many signal and image processing problems where training data is scarce and diversity is large on the one hand, and on the other, there is a lot of structure in the data that can be exploited.
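As one representative internal-learning recipe (in the spirit of deep image prior, one member of the family this survey covers; the setup below is a generic sketch, not the survey's prescribed method), a small network is fit to a single degraded input and stopped early:

    import torch
    import torch.nn as nn

    net = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(64, 3, 3, padding=1))
    noisy = torch.rand(1, 3, 64, 64)  # stand-in for the single input image
    z = torch.randn(1, 32, 64, 64)    # fixed random code fed to the network
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for step in range(500):           # early stopping: the net fits structure before noise
        loss = ((net(z) - noisy) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    denoised = net(z).detach()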
Nov 2024 • bioRxiv
Yaron Trink, Achia Urbach, Benjamin Dekel, Peter Hohenstein, Jacob Goldberger, Tomer Kalisky
The significant heterogeneity of Wilms’ tumors between different patients is thought to arise from genetic and epigenetic distortions that occur during various stages of fetal kidney development in a way that is poorly understood. To address this, we characterized the heterogeneity of alternative mRNA splicing in Wilms’ tumors using a publicly available RNAseq dataset of high-risk Wilms’ tumors and normal kidney samples. Through Pareto task inference and cell deconvolution, we found that the tumors and normal kidney samples are organized according to progressive stages of kidney development within a triangle-shaped region in latent space, whose vertices, or “archetypes,” resemble the cap mesenchyme, the nephrogenic stroma, and epithelial tubular structures of the fetal kidney. We identified a set of genes that are alternatively spliced between tumors located in different regions of latent space and found that many of these genes are associated with the Epithelial to Mesenchymal Transition (EMT) and muscle development. Using motif enrichment analysis, we identified putative splicing regulators, some of which are associated with kidney development. Our findings provide new insights into the etiology of Wilms’ tumors and suggest that specific splicing mechanisms in early stages of development may contribute to tumor development in different patients.
Nov 2024 • Hardware Security Attack Landscape and Countermeasures
N Nalla Anandakumar, Marcus Janke, Johann Knechtel, Itamar Levi, Xiaojin Zhao, Kwen-Siong Chong, Selcuk Kose, Bah-Hwee Gwee, John Chang, Lo Paul, Yi Wang
An overview is given of attacks on hardware security by introducing the four attack classes: logical attacks, physical observative attacks, physical fault injection attacks, and physical manipulative attacks. Descriptions and examples are given for each class of attack. Additionally, typical countermeasures that products use to prevent successful attacks are listed and explained. This provides both an introduction to hardware security and material for advanced readers. A methodology is suggested for a rating scheme on the countermeasures to allow easier product selection for given use cases and applications. Finally, the related rating tables are defined.
Nov 2024 • arXiv preprint arXiv:2411.12702
Yishai Klein, Edward Strizhevsky, Haim Aknin, Moshe Deutsch, Eliahu Cohen, Avi Pe'er, Kenji Tamasaku, Tobias Schulli, Ebrahim Karimi, Sharon Shwartz
The invention of X-ray interferometers has led to advanced phase-sensing devices that are invaluable in various applications. These include the precise measurement of universal constants (e.g., the Avogadro number) and of lattice parameters of perfect crystals, as well as phase-contrast imaging, which resolves details that standard absorption imaging cannot capture. However, the sensitivity and robustness of conventional X-ray interferometers are constrained by factors such as fabrication precision, beam quality, and, importantly, noise originating from external sources or the sample itself. In this work, we demonstrate a novel X-ray interferometric method of phase measurement with enhanced immunity to various types of noise, by extending, for the first time, the concept of the SU(1,1) interferometer into the X-ray regime. We use a monolithic silicon perfect-crystal device with two thin lamellae to generate correlated photon pairs via spontaneous parametric down-conversion (SPDC). Arrival-time coincidence and sum-energy filtration allow a high-precision separation of the correlated photon pairs, which carry the phase information, from orders-of-magnitude larger uncorrelated photonic noise. The novel SPDC-based interferometric method presented here is anticipated to exhibit enhanced immunity to vibrations as well as to mechanical and photonic noise, compared to conventional X-ray interferometers. Therefore, this SU(1,1) X-ray interferometer should pave the way to unprecedented precision in phase measurements, with transformative implications for a wide range of applications.
Nov 2024 • arXiv preprint arXiv:2411.10854
Adi Cohen, Daniel Wong, Jung-Suk Lee, Sharon Gannot
This paper introduces an explainable DNN-based beamformer with a postfilter (ExNet-BF+PF) for multichannel signal processing. Our approach combines the U-Net network with a beamformer structure to address this problem. The method involves a two-stage processing pipeline. In the first stage, time-invariant weights are applied to construct a multichannel spatial filter, namely a beamformer. In the second stage, a time-varying single-channel postfilter is applied at the beamformer output. Additionally, we incorporate an attention mechanism, inspired by its successful application in noisy and reverberant environments, to further improve speech enhancement. Furthermore, our study fills a gap in the existing literature by conducting a thorough spatial analysis of the network's performance. Specifically, we examine how the network utilizes spatial information during processing. This analysis yields valuable insights into the network's functionality, thereby enhancing our understanding of its overall performance. Experimental results demonstrate that our approach is not only straightforward to train but also yields superior results, obviating the necessity for prior knowledge of the speaker's activity.
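At the STFT level, the two-stage structure described above reduces to a time-invariant spatial filter followed by a time-varying spectral gain; a shape-level sketch in our notation (not the ExNet-BF+PF code, which learns w and g with a U-Net):

    import numpy as np

    def beamform_then_postfilter(X, w, g):
        # X: STFT of M microphone channels, shape (M, F, T)
        # w: time-invariant beamformer weights, shape (M, F)      -- stage 1
        # g: time-varying single-channel postfilter, shape (F, T) -- stage 2
        y = np.einsum('mf,mft->ft', np.conj(w), X)  # spatial filtering (beamformer)
        return g * y                                # per-bin spectral gain (postfilter)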
Nov 2024 • Nature Medicine
Johanna Klughammer, Daniel L Abravanel, Åsa Segerstolpe, Timothy R Blosser, Yury Goltsev, Yi Cui, Daniel R Goodwin, Anubhav Sinha, Orr Ashenberg, Michal Slyper, Sébastien Vigneau, Judit Jané‐Valbuena, Shahar Alon, Chiara Caraccio, Judy Chen, Ofir Cohen, Nicole Cullen, Laura K DelloStritto, Danielle Dionne, Janet Files, Allison Frangieh, Karla Helvie, Melissa E Hughes, Stephanie Inga, Abhay Kanodia, Ana Lako, Colin MacKichan, Simon Mages, Noa Moriel, Evan Murray, Sara Napolitano, Kyleen Nguyen, Mor Nitzan, Rebecca Ortiz, Miraj Patel, Kathleen L Pfaff, Caroline Porter, Asaf Rotem, Sarah Strauss, Robert Strasser, Aaron R Thorner, Madison Turner, Isaac Wakiro, Julia Waldman, Jingyi Wu, Jorge Gómez Tejeda Zañudo, Diane Zhang, Nancy U Lin, Sara M Tolaney, Eric P Winer, Edward S Boyden, Fei Chen, Garry P Nolan, Scott J Rodig, Xiaowei Zhuang, Orit Rozenblatt-Rosen, Bruce E Johnson, Aviv Regev, Nikhil Wagle
Although metastatic disease is the leading cause of cancer-related deaths, its tumor microenvironment remains poorly characterized due to technical and biospecimen limitations. In this study, we assembled a multi-modal spatial and cellular map of 67 tumor biopsies from 60 patients with metastatic breast cancer across diverse clinicopathological features and nine anatomic sites with detailed clinical annotations. We combined single-cell or single-nucleus RNA sequencing for all biopsies with a panel of four spatial expression assays (Slide-seq, MERFISH, ExSeq and CODEX) and H&E staining of consecutive serial sections from up to 15 of these biopsies. We leveraged the coupled measurements to provide reference points for the utility and integration of different experimental techniques and used them to assess variability in cell type composition and expression as well as emerging spatial expression …
Nov 2024 • arXiv preprint arXiv:2411.03832
Rotem Ben-Hur, Orian Leitersdorf, Ronny Ronen, Lidor Goldshmidt, Idan Magram, Lior Kaplun, Leonid Yavits, Shahar Kvatinsky
Genome analysis has revolutionized fields such as personalized medicine and forensics. Modern sequencing machines generate vast amounts of fragmented strings of genome data called reads. The alignment of these reads into a complete DNA sequence of an organism (the read mapping process) requires extensive data transfer between processing units and memory, leading to execution bottlenecks. Prior studies have primarily focused on accelerating specific stages of the read-mapping task. Conversely, this paper introduces a holistic framework called DART-PIM that accelerates the entire read-mapping process. DART-PIM facilitates digital processing-in-memory (PIM) for an end-to-end acceleration of the entire read-mapping process, from indexing using a unique data organization schema to filtering and read alignment with an optimized Wagner-Fischer algorithm. A comprehensive performance evaluation with real genomic data shows that DART-PIM achieves a 5.7x and 257x improvement in throughput and a 92x and 27x energy efficiency enhancement compared to state-of-the-art GPU and PIM implementations, respectively.
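For reference, the textbook row-by-row form of the Wagner-Fischer edit-distance recurrence that DART-PIM optimizes for in-memory execution; the paper's PIM variant reorganizes this computation, so the sketch below is only the baseline CPU algorithm:

    def wagner_fischer(a: str, b: str) -> int:
        # Classic dynamic-programming edit distance, kept to two rows of memory.
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                  # deletion
                               cur[j - 1] + 1,               # insertion
                               prev[j - 1] + (ca != cb)))    # substitution or match
            prev = cur
        return prev[-1]

    print(wagner_fischer("ACGT", "AGT"))  # 1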
Nov 2024 • Investigative Ophthalmology & Visual Science
Basel Obied, Stephen Richard, Alon Zahavi, Hila Kreizman-Shefer, Jacob Bajar, Dror Fixler, Matea Krmpotić, Olga Girshevitz, Nitza Goldenberg-Cohen