Uniform International Tax Collection and Distribution for Global Development, a Utopian BEPS Alternative
Fairness in International Taxation Symposium
University of Surrey
Henry Ordower
Saint Louis University School of Law

Abstract

Under the guise of compelling multinational enterprises (MNEs) to pay their fair share of income taxes, the OECD and other multinational agencies have introduced proposals to prevent MNEs from eroding the income tax base of developed economies by continuing to shift income artificially to low or zero tax jurisdictions. Some of the proposals have garnered substantial multinational support, most recently adoption by 136 countries of the OECD/G20 Inclusive Framework, including support from the current U.S. presidential administration for a global minimum tax. This Article reviews many of those international proposals.
The proposals tend to concentrate the incremental tax revenue from the prevention of base erosion in the treasuries of the developed economies, although the minimum tax proposal known as GloBE encourages low tax countries to adopt the minimum rate. Whether zero tax countries will transition successfully to imposing the minimum tax remains uncertain.

Developed economies lack a compelling moral claim to the incremental revenue, so this Article argues that collecting a fair tax from MNEs and other taxpayers should be a goal independent of claims on that revenue. This Article maintains that to prevent tax base erosion, the income tax base and its administration must be uniform across national borders, and it recommends applying uniform rules administered by an international taxing agency. The Article explores the convergence of tax rules under such an agency.
The Article illustrates the problem of uniform tax collection and distribution with a regional example: school funding in St. Louis County, Missouri, USA. Through that example, the Article shows the unnecessary and unfair manner in which some districts capture a disproportionate share of revenue and deploy it to provide higher quality education in their communities, leaving other communities far behind.

Distribution of tax revenue by the international agency should follow contextualized need. In addressing the conundrum of absolute poverty in the undeveloped and developing world vis-à-vis relative poverty in the developed world, the Article proposes that the taxing agency distribute all incremental revenue from the uniform tax where the need is greatest, in order to ameliorate absolute poverty and improve living standards without regard to income source. The location of income production, the destination of the goods and services generating the income, and the residence of the income producers should not determine the distribution of tax revenue. Rather, using contextualized need to determine distribution will enable developed economies to receive sufficient revenue to maintain their existing infrastructures and governmental services. Developed economies should forgo new revenue, for which they have not budgeted, in favor of improving worldwide living conditions for all. The proposals for uniform, worldwide taxation and revenue sharing based on contextualized need are admittedly aspirational and utopian, but they are designed to encourage debate on the sharing of resources in our increasingly globalized world.

REEVALUATING THE ALLOCATION OF TAX COLLECTION OF IMMIGRANTS BETWEEN HOME COUNTRY AND HOST COUNTRY
TAMIR SHANAN AND DORON NAROTZKI

ABSTRACT
In 1972, when Professor Jagdish Bhagwati published his seminal proposal "The Brain Drain and Income Taxation, a Proposal," his fundamental idea was to tax skilled workers who had emigrated from developing countries to developed countries and return at least some of the revenue to the developing countries for their economic loss. Professor Bhagwati's underlying rationale for this tax was the need to compensate developing countries for the losses they experienced when individuals who were born, raised, and often professionally trained there eventually left for developed countries in order to find more lucrative employment opportunities (higher salaries, better working conditions, etc.) and improve their standard of living (more stable lives in the developed countries and better educational opportunities for the migrants' children).
The basic idea of the so-called "brain drain tax" is that skilled migrants typically earn economic rents that rely on skills and know-how acquired in their home country (especially when the training and education relied on state funding), and that through the migrants' relocation those rents benefit the host country, which invested none of its own resources to obtain skilled professionals. Furthermore, such relocation of skilled professionals from developing countries to developed countries also results in shortages of skilled professionals in the developing countries, putting them at a further disadvantage.
Professor Bhagwati's proposal was aimed mainly at promoting global fairness between developing and developed countries and focused on the phenomenon of skilled migrants leaving developing countries for developed ones. Our research seeks to develop this idea further and to explore the jurisdiction to tax individuals, and more specifically the fundamental principle of "residency" under existing international norms. Accordingly, our research explores the economic and tax implications of migration in general, covering skilled and unskilled migrants and migration from one country to another (not necessarily from developing countries to developed countries), and eventually suggests a model to help countries tax those individuals in a fairer manner, one that rests on social justice and on the social contract and ties between the individual and her domiciliary community, rather than solely on relations between countries or on technical standards, as is currently the case.
One of the challenges in implementing such a tax is that under the current international tax regime, taxing jurisdiction follows residence, and many countries define residency for tax purposes based on one version or another of a physical presence test (presence of more than 183 days in one country during the calendar year) or on the place of the migrant's habitual abode in the relevant calendar year. As such, more often than not, immigrants are considered residents of the host country rather than of their home country under existing rules. One possible solution to this challenge is to strengthen the domiciliary concept, which locates "home" in the country where the individual resides permanently without any intention of moving. Under this concept, for instance, an immigrant who pursued a graduate degree in a host country and decides to work there for several years after graduating does not cease to have his permanent home in his home country merely because he is temporarily residing elsewhere. Another possible solution lies in an alternative personal jurisdiction regime: the adoption of citizenship-based taxation.
The need to compensate home countries whose citizens relocate to another country for the loss of untaxed unrealized gains has been addressed by the many countries that adopted exit taxes. These exit taxes attempt to capture unrealized, untaxed gains based on the appreciation of assets during the period the individual owned the property, measured just before she or he abandoned tax residency or renounced citizenship. Unfortunately, these exit taxes do not capture human capital appreciation: at the time of migration, emigrants generally have not yet benefited from the increase in wages; many of the economic benefits deriving from the know-how and intellectual property they acquired or developed before relocating can easily be deferred; and the "appreciation" period (unlike the holding period of movable property) is less explicit and so poses greater collection challenges for the home countries.
Our research explores the different proposals raised by legal and tax scholars over the years regarding a brain drain tax and proposes a model that tries to capture unrealized and untaxed economic "rent" deriving from know-how and skills attributable to the migrants' home countries. We also compare the U.K. domiciliary-based regime with the U.S. citizenship-based regime and propose a model that can be relatively easily adopted, administered, and monitored by the home countries and that will enhance global fairness between countries.

Abstract: This article analyzes the regulation of transfer prices, specifically inquiring into the purpose of imposing this obligation on operations with tax havens. It argues that applying this regime to tax havens serves to discipline and punish States that resist entering neoliberal economic globalization. To develop this thesis, the article draws on Gerald Cohen's analytical Marxism to analyze the international institutions, such as the OECD, that advocate the arm's length standard and unify tax regulations.

Keywords: Transfer pricing, multinationals, analytical Marxism, tax havens, globalization.

Proposal
Economic globalization is a long-standing process, but in our recent history it has been marked by the expansion of neoliberalism and by the legal and economic adaptation of States to a unified global order built around freedom of capital and free competition. This translates into constitutional reforms and the enactment of organic laws that adjust each State's institutional system to the recommendations of international institutions. The key to this process is to "unify" the treatment of private capital with respect to its free movement and guarantees, while adapting State action to control other types of economies that distort the neoliberal model.
In this general context, this article addresses the problem in relation to the transfer pricing regime, which is applied in international tax auditing and is intended to prevent the erosion of the tax bases of the States involved in operations between related parties and with tax havens. At least that is how the OECD and the tax authority present it; but, as problematized in this article, that purpose is only partially true. It is argued instead that applying this regime to tax havens serves to discipline and punish States that resist entering neoliberal economic globalization. This thesis is reached by answering the following research question:
From the critical perspective of Gerald Cohen, what is the political and economic purpose of imposing the transfer pricing obligation on operations with tax havens even in the absence of an economic link?
This question is at the center of the academic dissertation presented in this proposal, which is developed in three sections. The first, on the arm's length principle, characterizes the global economic order, identifies the relevant OECD regulation, and offers some observations on the transnational economic power of companies from Gerald Cohen's critical perspective. The second section addresses the transfer pricing obligation, explaining its usefulness and application and how it works in practice in the Colombian legal system, and problematizes the paradigm of "erosion of taxable bases" in tax havens. Finally, the third section compares the purpose of transfer prices between related parties and with tax havens, with the aim of identifying their role in economic globalization and sustainable development.

Corporate Tax: We Tried the Stick, How About the Carrot?
Doron Narotzki
Abstract
Due to corporations' ongoing focus on tax planning and their continuous efforts to find new tax minimization strategies, multinational corporations have not been paying their "fair share" of taxes for a long time, and as a result the corporate tax law is unable to generate much tax revenue. This is a global phenomenon, by no means only a U.S. domestic problem. Governments' response to this problem has always been the same and almost one-dimensional: introducing new tax laws and regulations and revising old ones to shut down the so-called "loopholes," hoping this will put an end to corporate tax evasion. For decades this approach has failed. This paper examines the history of the corporate tax in the U.S. and the corporate tax avoidance industry, which has grown into a global problem over the last 50 years, and finally suggests a new policy that aims to give corporations an incentive to shift their focus and efforts away from abusive tax planning and into investments in the economy. At the heart of this new policy are the recent Pillar Two rules and the global minimum corporate tax, which can and should be used to expand international cooperation between countries and as a tool for minimizing tax evasion. Pillar Two is a truly historic moment for international and corporate tax: for the first time ever, it helped set and define a corporate tax rate acceptable to (at least) 136 countries. Now is the time for countries to adopt a new way of thinking: offer this rate as the carrot for corporations that choose to act in certain ways, while still keeping the stick, the current (higher) corporate tax rate, for those that choose not to adopt the new policy.

MCMC methods are a great tool for fitting data because they explore the whole parameter space and, more importantly, they deliver uncertainties. Uncertainties are crucial because they show how reliable the fit is. When the fit looks reasonable and the uncertainties are not very high, you can claim that you described your data successfully. But what happens when your fits do not look so good, either because the values are unrealistic in the context of the system you are studying, or because the uncertainties are too high for the fits to be reliable? Is the data or the fitting approach the cause of the failure?
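
A minimal sketch of how such uncertainties are typically read off an MCMC run, assuming the flattened chain of posterior samples is available as a NumPy array (the synthetic samples and parameter names are purely illustrative):

```python
import numpy as np

# Stand-in for a flattened MCMC chain of shape (n_samples, n_params);
# in practice this would come from your sampler.
rng = np.random.default_rng(0)
chain = rng.normal(loc=[2.0, -1.0], scale=[0.1, 0.5], size=(10_000, 2))

for i, name in enumerate(["amplitude", "slope"]):  # hypothetical parameter names
    lo, med, hi = np.percentile(chain[:, i], [16, 50, 84])
    # Posterior median with a 68% credible interval; a wide interval
    # is the warning sign that the fit is not reliable.
    print(f"{name} = {med:.3f} (+{hi - med:.3f} / -{med - lo:.3f})")
```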

In this talk, I will describe my personal experience using MCMC methods during the course of my PhD. First, I will show through different examples how the physical interpretation of the fitted values and their uncertainties was crucial in solving problems in my research. Sometimes, failing to fit the data, especially due to high uncertainties, led me to new insights about the system I was struggling to understand, even taking the project in a whole new direction. Second, I will talk about situations where I still struggle to fit the data, and I will share my insights as to why.

Nowadays, the volume of science and engineering data has increased substantially, and a variety of models have been developed to help understand the observations. Markov chain Monte Carlo (MCMC) has been established as the standard procedure for inferring model parameters from the available data in a Bayesian framework. Real systems such as interacting galaxies require complex models, and these models are computationally expensive to evaluate. The goal of this project is to provide a flexible platform for connecting a range of efficient algorithms for any user-defined circumstances. It will also serve as a testbed for assessing new state-of-the-art model-fitting algorithms.

The most commonly used MCMC methods are variants of the Metropolis-Hastings (MH) algorithm. At the beginning of this project, and in this article, the standard MH-MCMC algorithm and the affine-invariant ensemble MCMC that has dominated astronomical analysis over the past decade were tested to reveal the performance of each sampler on problems with known solutions. The Hamiltonian Monte Carlo algorithm was also tested, showing the circumstances in which it outperforms the other two.
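
To make the baseline concrete, here is a minimal random-walk Metropolis-Hastings sketch run on a problem with a known solution, a correlated two-dimensional Gaussian; it is an illustration under assumed settings, not the project's actual code:

```python
import numpy as np

def log_gauss(x, mean, icov):
    """Log-density of a multivariate Gaussian (up to an additive constant)."""
    d = x - mean
    return -0.5 * d @ icov @ d

def metropolis_hastings(log_prob_fn, x0, n_steps, step=0.5, rng=None):
    """Random-walk Metropolis-Hastings with an isotropic Gaussian proposal."""
    rng = rng or np.random.default_rng()
    x, lp = x0, log_prob_fn(x0)
    chain = np.empty((n_steps, x0.size))
    for i in range(n_steps):
        prop = x + step * rng.standard_normal(x0.size)
        lp_prop = log_prob_fn(prop)
        if np.log(rng.random()) < lp_prop - lp:  # accept/reject step
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# Target with a known solution: a correlated 2-D Gaussian.
mean = np.array([1.0, -2.0])
cov = np.array([[1.0, 0.8], [0.8, 1.0]])
icov = np.linalg.inv(cov)
chain = metropolis_hastings(lambda x: log_gauss(x, mean, icov),
                            np.zeros(2), 50_000)
# Discard burn-in, then compare the sample mean with the truth.
print("sample mean:", chain[10_000:].mean(axis=0), "truth:", mean)
```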

In my talk, I will start by introducing the basics of Markov chain Monte Carlo (MCMC) methods, beginning with the Metropolis and Gibbs samplers and then proceeding to the Hamiltonian Monte Carlo sampler. A key focus will be the domain of applicability of each of these sampling methods and the difficulties they encounter in practice. I will then describe strategies for overcoming some of the difficulties encountered with basic versions of these methods. I will also touch on output diagnostics and on determining when a sampler is working as desired. Finally, I will consider sampling when the likelihood of a model is expensive to compute and how MCMC can be used in this situation. The latter case may be of particular interest in astronomy applications.
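
As a taste of the first part of the talk, a minimal Gibbs sampler sketch for a standard bivariate Gaussian with correlation rho, a textbook case where both full conditionals are known in closed form (illustrative only, not material from the talk itself):

```python
import numpy as np

def gibbs_bivariate_gaussian(rho, n_steps, rng=None):
    """Gibbs sampling for a standard bivariate Gaussian with correlation rho.
    Each full conditional x | y is Normal(rho * y, 1 - rho**2), and vice versa."""
    rng = rng or np.random.default_rng()
    x = y = 0.0
    chain = np.empty((n_steps, 2))
    for i in range(n_steps):
        x = rng.normal(rho * y, np.sqrt(1 - rho**2))  # draw x | y
        y = rng.normal(rho * x, np.sqrt(1 - rho**2))  # draw y | x
        chain[i] = (x, y)
    return chain

chain = gibbs_bivariate_gaussian(rho=0.9, n_steps=20_000)
# After discarding burn-in, the empirical correlation should be close to 0.9.
print("empirical correlation:", np.corrcoef(chain[5_000:].T)[0, 1])
```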

Various multiobjective optimization algorithms have been proposed under the common assumption that evaluating each objective function takes the same amount of time. Little attention has been paid to more general and realistic scenarios in which different objectives are evaluated by different computer simulations or physical experiments with different time complexities (latencies), and only a very limited number of function evaluations is allowed for the slow objective. In this work, we investigate benchmark scenarios with two objectives. We propose a transfer learning scheme within a surrogate-assisted evolutionary algorithm framework that augments the training data for the surrogate of the slow objective by transferring knowledge from the fast one. Specifically, a hybrid domain adaptation method that aligns the second-order statistics and marginal distributions across domains is introduced to generate promising samples in the decision space according to the search experience on the fast objective. A co-training method based on Gaussian process models is adopted to predict the value of the slow objective, and the samples with a high confidence level are selected as augmented synthetic training data, thereby enhancing the approximation quality of the surrogate of the slow objective. Our experimental results demonstrate that the proposed algorithm outperforms existing surrogate-assisted and non-surrogate-assisted delay-handling methods on a range of bi-objective optimization problems. The approach is also more robust to varying levels of latency and correlation between the objectives.
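
The abstract leaves the implementation details to the paper; as one concrete reading of "aligning the second-order statistics across domains", the following CORAL-style sketch whitens samples gathered on the fast objective and re-colours them with the covariance of the scarce slow-objective samples, also matching the marginal means (all data and names are illustrative assumptions):

```python
import numpy as np

def coral_align(source, target, eps=1e-6):
    """CORAL-style second-order alignment: whiten the source features,
    then re-colour them with the target covariance and shift to the target mean."""
    cs = np.cov(source, rowvar=False) + eps * np.eye(source.shape[1])
    ct = np.cov(target, rowvar=False) + eps * np.eye(target.shape[1])

    def sqrtm(m, inverse=False):
        # Matrix (inverse) square root via eigendecomposition; m is SPD.
        w, v = np.linalg.eigh(m)
        w = np.clip(w, eps, None)
        return (v * (w ** (-0.5 if inverse else 0.5))) @ v.T

    centred = source - source.mean(axis=0)
    aligned = centred @ sqrtm(cs, inverse=True) @ sqrtm(ct)
    return aligned + target.mean(axis=0)

rng = np.random.default_rng(1)
fast = rng.normal(0.0, 1.0, size=(200, 3))   # stand-in for fast-objective samples
slow = rng.normal(2.0, 0.5, size=(50, 3))    # stand-in for scarce slow-objective samples
# The aligned covariance should approximate that of the slow-objective domain.
print(np.cov(coral_align(fast, slow), rowvar=False).round(2))
```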

Slice Sampling has emerged as a powerful Markov chain Monte Carlo algorithm that adapts to the characteristics of the target distribution with minimal hand-tuning. However, Slice Sampling's performance is highly sensitive to the user-specified initial length scale hyperparameter, and the method generally struggles with poorly scaled or strongly correlated distributions. To address these difficulties, we introduce Ensemble Slice Sampling (ESS) and its Python implementation, zeus: a new class of algorithms that bypasses such issues by adaptively tuning the initial length scale and utilising an ensemble of parallel walkers to handle strong correlations between parameters efficiently. These affine-invariant algorithms are trivial to construct, require no hand-tuning, and can easily be implemented in parallel computing environments. Empirical tests show that Ensemble Slice Sampling can improve efficiency by more than an order of magnitude compared to conventional MCMC methods on a broad range of highly correlated target distributions. In cases of strongly multimodal target distributions, Ensemble Slice Sampling can sample efficiently even in high dimensions. We argue that the parallel, black-box and gradient-free nature of the method renders it ideal for use in scientific fields such as physics, astrophysics and cosmology, which are dominated by a wide variety of computationally expensive and non-differentiable models.
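
For readers who want to try the method, a minimal usage sketch based on zeus's documented interface; the Gaussian target and all settings are illustrative stand-ins for a real model:

```python
import numpy as np
import zeus  # pip install zeus-mcmc

ndim, nwalkers, nsteps = 5, 10, 1000

def log_prob(x):
    # Stand-in target: an isotropic Gaussian centred at 1
    # (a real log-likelihood plus log-prior would go here).
    return -0.5 * np.sum((x - 1.0) ** 2)

start = 1.0 + 0.1 * np.random.randn(nwalkers, ndim)  # small ball near the mode
sampler = zeus.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(start, nsteps)
samples = sampler.get_chain(flat=True, discard=nsteps // 4)
print(samples.mean(axis=0))  # should be close to 1 in every dimension
```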

The improvement of energy efficiency of existing buildings is key for meeting 2030 and 2050 energy and CO2 emission targets. Thus, building simulation tools play a crucial role in evaluating the performance of energy retrofit actions, not only at present, but also under future climate scenarios.
A Bayesian calibration approach, combined with sensitivity analysis, is applied to reduce the discrepancies between measured and simulated hourly indoor air temperatures. Calibration is applied to a test cell case study developed using the EnergyPlus building simulation software. Several scenarios are evaluated to determine how different variables may impact the calibration process: orientations, activation of mechanical ventilation, different blind aperture levels, etc. Uncertainties associated with model inputs (fixed parameters in the energy model), model discrepancies due to physical limitations of the building energy model (simplifications when compared to the real performance of the building), errors in field observations and noisy measurements were also accounted for.
Even though the uncalibrated models were within the uncertainty ranges specified by the ASHRAE Guidelines, pre-calibration simulation outputs over-predicted measurements by up to 3.2 °C. After calibration, the average maximum temperature difference was reduced to 0.68 °C, improving the results by almost 80%. These techniques are thus shown to improve the level of agreement between on-site measurements and simulated outputs. Moreover, the implementation of this methodology is useful for calibrating and validating indoor hourly temperatures and, consequently, provides adequate results for thermal comfort assessment.
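
For reference, a small sketch of the hourly calibration metrics, NMBE and CV(RMSE), that underlie the ASHRAE Guideline 14 uncertainty ranges mentioned above; the data here are synthetic and purely illustrative:

```python
import numpy as np

def calibration_metrics(measured, simulated, p=1):
    """NMBE and CV(RMSE) in percent, in the spirit of ASHRAE Guideline 14
    (hourly criteria are commonly taken as |NMBE| <= 10% and CV(RMSE) <= 30%)."""
    m, s = np.asarray(measured, float), np.asarray(simulated, float)
    n, mean = m.size, m.mean()
    nmbe = 100.0 * (m - s).sum() / ((n - p) * mean)
    cvrmse = 100.0 * np.sqrt(((m - s) ** 2).sum() / (n - p)) / mean
    return nmbe, cvrmse

# Synthetic example: a simulation over-predicting by a constant 0.5 degC.
hours = np.arange(24)
measured = 22.0 + 2.0 * np.sin(2.0 * np.pi * hours / 24.0)
simulated = measured + 0.5
nmbe, cvrmse = calibration_metrics(measured, simulated)
print(f"NMBE = {nmbe:.2f}%, CV(RMSE) = {cvrmse:.2f}%")
```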