Corporate Tax: We Tried the Stick, How About the Carrot?
Doron Narotzki
Abstract
Due to corporations’ ongoing focus on tax planning and their continuous efforts to find new tax minimization strategies, multinational corporations have not been paying their “fair share” of taxes for a long time, and as a result the corporate tax law is unable to generate much tax revenue. This is a global phenomenon, and by no means only a U.S. domestic problem. Governments’ response to this problem has always been the same and almost one-dimensional: introducing new tax laws and regulations and revising old tax laws in order to shut down the so-called “loopholes,” in the hope that this will put an end to the problem of corporate tax evasion. For decades this approach has failed us. This paper examines the history of the corporate tax in the U.S. and the corporate tax avoidance industry, which has grown into a global problem over the last 50 years, and finally suggests a new policy that aims to create an incentive for corporations to shift their focus and efforts away from abusive tax planning and into investments in the economy. At the heart of this new policy is the recent Pillar 2 and the Global Minimum Corporate Tax, which can and should be used to expand international cooperation between countries and as a tool for minimizing tax evasion. Pillar 2 is a truly historic moment for international and corporate tax: for the first time ever, it helped set and define the corporate tax at a rate that is acceptable to (at least) 136 countries. Now is the time for countries to adopt a new way of thinking: offer this rate as the carrot for corporations that choose to act in certain ways, while still keeping the stick, the current (higher) corporate tax rate, for those that choose not to adopt the new policy.
MCMC methods are a great tool for fitting data because they explore the whole parameter space and, more importantly, they deliver uncertainties. Uncertainties are crucial because they show how reliable the fit is. When the fit looks reasonable and the uncertainties are not very high, you can claim that you described your data successfully. However, what happens when your fits do not look so good, either because the values are unrealistic in the context of the system you are studying or because the uncertainties are too high for the fits to be reliable? Is the data or the approach to fitting it the cause of the failure?
In this talk, I will describe my personal experience using MCMC methods during the course of my PhD. Firstly, I will show through different examples how the physical interpretation of the fitted values and their uncertainties was crucial to solving problems in my research. Sometimes, failing to fit the data, especially due to high uncertainties, led me to new insights about the system I was struggling to understand, even taking that project in a whole new direction. Secondly, I will discuss situations where I still struggle to fit the data, and I will share my insights as to why.
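As a minimal sketch of the fit-with-uncertainties workflow described above (not the specific analyses from the talk), the example below fits a straight line to invented noisy data with the emcee package and reports parameter estimates with their spread; the model, noise level and flat priors are all assumptions made for illustration.

```python
# Minimal sketch: fit y = m*x + b to synthetic noisy data with emcee and
# report parameter medians and standard deviations as uncertainties.
import numpy as np
import emcee

rng = np.random.default_rng(42)
x = np.linspace(0, 10, 50)
y_obs = 2.0 * x + 1.0 + rng.normal(0.0, 1.0, size=x.size)  # known noise sigma = 1

def log_prob(theta):
    m, b = theta
    model = m * x + b
    return -0.5 * np.sum((y_obs - model) ** 2)  # Gaussian likelihood, flat priors

ndim, nwalkers = 2, 32
p0 = rng.normal([2.0, 1.0], 0.1, size=(nwalkers, ndim))  # start near a rough guess

sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000, progress=False)
chain = sampler.get_chain(discard=500, flat=True)

for name, samples in zip(["m", "b"], chain.T):
    print(f"{name} = {np.median(samples):.3f} +/- {np.std(samples):.3f}")
```

If the reported uncertainties come out large relative to the values themselves, that is exactly the warning sign discussed above: either the data cannot constrain the model, or the model is the wrong one for the system.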
Nowadays, the volume of science and engineering data has increased substantially, and a variety of models have been developed to help understand the observations. Markov chain Monte Carlo (MCMC) has been established as the standard procedure for inferring the parameters of these models from the available data in a Bayesian framework. Real systems such as interacting galaxies require complex models, and these models are computationally prohibitive. The goal of this project is to provide a flexible platform for connecting a range of efficient algorithms for any user-defined circumstances. It will also serve as a testbed for assessing new state-of-the-art model-fitting algorithms.
The most commonly used MCMC methods are variants of the Metropolis-Hastings (MH) algorithm. At the beginning of this project, and in this article, the standard MH-MCMC algorithm, together with affine-invariant ensemble MCMC, which has dominated astronomical analysis over the past few decades, was tested to reveal the performance of each sampler on problems with known solutions. The Hamiltonian Monte Carlo algorithm was also tested, showing the circumstances in which it outperforms the other two.
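To illustrate what a "known solution" test looks like in practice, the sketch below runs a plain random-walk Metropolis-Hastings sampler on a correlated 2D Gaussian and compares the recovered moments with the truth; the target, step size and chain length are illustrative choices, not the benchmarks used in the article.

```python
# Random-walk Metropolis-Hastings on a correlated 2D Gaussian with known
# mean and covariance, so the sampler's output can be checked against the truth.
import numpy as np

rng = np.random.default_rng(0)
true_mean = np.array([1.0, -2.0])
true_cov = np.array([[1.0, 0.8], [0.8, 1.0]])
inv_cov = np.linalg.inv(true_cov)

def log_target(x):
    d = x - true_mean
    return -0.5 * d @ inv_cov @ d

n_steps, step = 50_000, 0.5
chain = np.empty((n_steps, 2))
x, logp = true_mean.copy(), log_target(true_mean)
accepted = 0
for i in range(n_steps):
    prop = x + step * rng.standard_normal(2)
    logp_prop = log_target(prop)
    if np.log(rng.random()) < logp_prop - logp:   # Metropolis acceptance rule
        x, logp = prop, logp_prop
        accepted += 1
    chain[i] = x

print("acceptance rate:", accepted / n_steps)
print("sample mean:", chain.mean(axis=0), "vs true", true_mean)
print("sample cov:\n", np.cov(chain.T), "\nvs true\n", true_cov)
```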
In my talk, I will start by introducing the basics of Markov Chain Monte Carlo (MCMC) methods, beginning with the Metropolis and Gibbs samplers and then proceeding to the Hamiltonian Monte Carlo sampler. A key focus will be to describe the domains of applicability of each of these sampling methods and the difficulties they encounter when applied. I will then describe strategies to overcome some of the difficulties encountered with basic versions of these methods. I will also touch on output diagnostics and on determining when a sampler is working as desired. Finally, I will consider the case of sampling when the likelihood of a model is expensive to compute and how MCMC can be used in this situation. The latter case may be of interest in astronomy applications.
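As a companion to the samplers named above, here is a minimal Gibbs sampler for a toy bivariate Gaussian, chosen because its conditional distributions are known in closed form; the correlation and chain length are arbitrary illustrative values.

```python
# Gibbs sampling for a zero-mean, unit-variance bivariate Gaussian with
# correlation rho: each coordinate is redrawn from its exact conditional in turn.
import numpy as np

rng = np.random.default_rng(1)
rho = 0.9
n_steps = 20_000
samples = np.empty((n_steps, 2))
x1, x2 = 0.0, 0.0

for i in range(n_steps):
    # x1 | x2 ~ N(rho * x2, 1 - rho^2), and symmetrically for x2 | x1
    x1 = rng.normal(rho * x2, np.sqrt(1.0 - rho**2))
    x2 = rng.normal(rho * x1, np.sqrt(1.0 - rho**2))
    samples[i] = (x1, x2)

print("sample correlation:", np.corrcoef(samples.T)[0, 1], "(target:", rho, ")")
```

The same example also shows a typical difficulty: as rho approaches 1, successive Gibbs draws become highly autocorrelated and the chain mixes slowly.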
Various multiobjective optimization algorithms have been proposed with a common assumption that the evaluation of each objective function takes the same period of time. Little attention has been paid to more general and realistic optimization scenarios where different objectives are evaluated by different computer simulations or physical experiments with different time complexities (latencies), and only a very limited number of function evaluations is allowed for the slow objective. In this work, we investigate benchmark scenarios with two objectives. We propose a transfer learning scheme within a surrogate-assisted evolutionary algorithm framework to augment the training data for the surrogate of the slow objective function by transferring knowledge from the fast one. Specifically, a hybrid domain adaptation method that aligns the second-order statistics and marginal distributions across domains is introduced to generate promising samples in the decision space according to the search experience of the fast objective. A Gaussian process model-based co-training method is adopted to predict the values of the slow objective, and predictions with a high confidence level are selected as augmented synthetic training data, thereby enhancing the approximation quality of the surrogate of the slow objective. Our experimental results demonstrate that the proposed algorithm outperforms existing surrogate-assisted and non-surrogate-assisted delay-handling methods on a range of bi-objective optimization problems. The approach is also more robust to varying levels of latency and correlation between the objectives.
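The sketch below illustrates only the basic surrogate idea underlying this abstract, namely fitting a Gaussian process to a handful of expensive "slow" evaluations and keeping candidate predictions whose uncertainty is low enough to trust; the transfer-learning, domain-adaptation and co-training components of the paper are not reproduced, and the slow objective is a made-up toy function.

```python
# Gaussian process surrogate for an expensive objective, with confidence-based
# filtering of candidate predictions (scikit-learn assumed as the GP library).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def slow_objective(x):                      # stand-in for an expensive simulation
    return np.sin(3 * x) + 0.5 * x

rng = np.random.default_rng(2)
X_train = rng.uniform(0, 3, size=(8, 1))    # only a few slow evaluations are affordable
y_train = slow_objective(X_train).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
gp.fit(X_train, y_train)

X_cand = np.linspace(0, 3, 200).reshape(-1, 1)
mean, std = gp.predict(X_cand, return_std=True)
trusted = X_cand[std < 0.1]                 # keep only high-confidence predictions
print(f"{len(trusted)} of {len(X_cand)} candidates predicted with std < 0.1")
```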
Slice Sampling has emerged as a powerful Markov Chain Monte Carlo algorithm that adapts to the characteristics of the target distribution with minimal hand-tuning. However, Slice Sampling’s performance is highly sensitive to the user-specified initial length scale hyperparameter and the method generally struggles with poorly scaled or strongly correlated distributions. To this end, we introduce Ensemble Slice Sampling (ESS) and its Python implementation, zeus, a new class of algorithms that bypasses such difficulties by adaptively tuning the initial length scale and utilising an ensemble of parallel walkers in order to efficiently handle strong correlations between parameters. These affine-invariant algorithms are trivial to construct, require no hand-tuning, and can easily be implemented in parallel computing environments. Empirical tests show that Ensemble Slice Sampling can improve efficiency by more than an order of magnitude compared to conventional MCMC methods on a broad range of highly correlated target distributions. In cases of strongly multimodal target distributions, Ensemble Slice Sampling can sample efficiently even in high dimensions. We argue that the parallel, black-box and gradient-free nature of the method renders it ideal for use in scientific fields such as physics, astrophysics and cosmology which are dominated by a wide variety of computationally expensive and non-differentiable models.
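For context, a minimal usage sketch of zeus on a correlated Gaussian target is shown below; the target, walker count and chain length are illustrative choices, and the emcee-like call pattern follows the package interface as I understand it, so treat it as a sketch rather than authoritative documentation.

```python
# Minimal zeus (Ensemble Slice Sampling) usage on a correlated Gaussian target.
import numpy as np
import zeus

ndim = 5
cov = 0.5 * np.ones((ndim, ndim)) + 0.5 * np.eye(ndim)   # strongly correlated target
inv_cov = np.linalg.inv(cov)

def log_prob(x):
    return -0.5 * x @ inv_cov @ x

rng = np.random.default_rng(5)
nwalkers = 2 * ndim
start = 0.01 * rng.standard_normal((nwalkers, ndim))      # small ball around the origin

sampler = zeus.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(start, 1000)
chain = sampler.get_chain(flat=True, discard=200)
print(chain.shape, chain.mean(axis=0))
```

Note that no step size or proposal scale is supplied: the adaptive tuning of the initial length scale is exactly the "no hand-tuning" property claimed in the abstract.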
The improvement of energy efficiency of existing buildings is key for meeting 2030 and 2050 energy and CO2 emission targets. Thus, building simulation tools play a crucial role in evaluating the performance of energy retrofit actions, not only at present, but also under future climate scenarios.
A Bayesian calibration approach, combined with sensitivity analysis, is applied to reduce the discrepancies between measured and simulated hourly indoor air temperatures. Calibration is applied to a test cell case study developed using the EnergyPlus building simulation software. Several scenarios are evaluated to determine how different variables may impact the calibration process: orientations, activation of mechanical ventilation, different blind aperture levels, etc. Uncertainties associated with model inputs (fixed parameters in the energy model), model discrepancies due to physical limitations of the building energy model (simplifications when compared to the real performance of the building), errors in field observations and noisy measurements were also accounted for.
Even though the uncalibrated models were within the uncertainty ranges specified by the ASHRAE Guidelines, pre-calibration simulation outputs over-predicted measurements by up to 3.2 °C. After calibration, the average maximum temperature difference was reduced to 0.68 °C, improving the results by almost 80%. Thus, these techniques are proven to improve the level of agreement between on-site measurements and simulated outputs. Moreover, the implementation of this methodology is useful for calibrating and validating indoor hourly temperatures and, consequently, for providing adequate results for thermal comfort assessment.
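The following is a deliberately simplified conceptual sketch of Bayesian calibration of a single uncertain model input against noisy measured temperatures; the linear toy "building model", prior range and noise level are invented for illustration and stand in for the EnergyPlus model and the measured test-cell data used in the study.

```python
# Toy Bayesian calibration: infer one uncertain input of a simple temperature
# model from noisy "measured" hourly data, using a grid posterior under a
# uniform prior and a Gaussian measurement-error model.
import numpy as np

def toy_model(u, t_out):
    # Toy indoor temperature: outdoor temperature plus a gain scaled by input u.
    return t_out + u * 5.0

rng = np.random.default_rng(3)
t_out = rng.uniform(5, 15, size=48)                 # 48 "hourly" outdoor temperatures
u_true, noise = 1.4, 0.3
t_meas = toy_model(u_true, t_out) + rng.normal(0, noise, size=t_out.size)

u_grid = np.linspace(0.5, 2.5, 2001)                # uniform prior support
log_post = np.array([-0.5 * np.sum((t_meas - toy_model(u, t_out)) ** 2) / noise**2
                     for u in u_grid])
post = np.exp(log_post - log_post.max())
post /= np.trapz(post, u_grid)

mean_u = np.trapz(u_grid * post, u_grid)
std_u = np.sqrt(np.trapz((u_grid - mean_u) ** 2 * post, u_grid))
print(f"calibrated input: {mean_u:.2f} +/- {std_u:.2f} (true value {u_true})")
```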
With a few lines of Python code, it is possible to train a neural network over a data-set and then use it to make extrapolations.
These techniques are routinely used in particle physics and condensed matter physics, and only recently has some pioneering work been done to apply them to the nuclear physics case.
Given the incredible potential of these techniques, is it still necessary to invest time to build complex nuclear models to do the same thing?
Why not directly use machine learning algorithms to analyse the data?
In my talk, I will present some applications of machine learning techniques to the case of nuclear masses: using either neural networks [1] or Gaussian Process Emulators [2], I will show how these algorithms can be used to reproduce this particular observable. In particular, I will consider the case of a neural network used to reproduce nuclear masses both without any underlying model and with a model to improve performance.
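To make the "few lines of Python" claim above concrete, here is a minimal sketch that trains a small neural network on synthetic (N, Z) inputs with a smooth, liquid-drop-like target; the data are placeholders invented for illustration, not real nuclear masses, and scikit-learn is assumed as the library.

```python
# Train a small neural network on synthetic (N, Z) -> energy-like values and
# inspect the in-sample residuals.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
NZ = rng.integers(20, 120, size=(500, 2)).astype(float)            # fake (N, Z) pairs
A = NZ.sum(axis=1)
target = 15.0 * A - 17.0 * A ** (2 / 3)                            # smooth synthetic trend

net = make_pipeline(
    StandardScaler(),                                              # scale inputs for the MLP
    MLPRegressor(hidden_layer_sizes=(32, 32), solver="lbfgs",
                 max_iter=5000, random_state=0),
)
net.fit(NZ, target)

residuals = target - net.predict(NZ)
print(f"in-sample RMS residual: {np.sqrt(np.mean(residuals**2)):.2f}")
```

The cautionary point of [1] applies directly to such a sketch: the in-sample residuals say little about how the network behaves when extrapolated beyond the training region.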
This procedure may be very helpful: in the short term, it will help us detect possible trends in the data and, eventually, make reliable predictions in nearby regions of the nuclear chart; in the long term, by interpreting the algorithms, we may learn what physics is missing from the nuclear models we currently use.
[1] Pastore, A., & Carnini, M. (2021). Extrapolating from neural network models: a cautionary tale. Journal of Physics G: Nuclear and Particle Physics.
[2] Shelley, M., & Pastore, A. (2021). A new mass model for nuclear astrophysics: crossing 200 keV accuracy. Universe, 7(5), 131.