In the last forty years or so, the tax system in the US has reinforced extreme economic inequality. The three progressive taxes created to compensate for such inequalities are the individual income tax, the corporate income tax, and the estate tax, but all three have weakened in recent years. As a consequence, the government has almost no tools left to redistribute wealth from the top to the bottom. The absence of a strong progressive taxation system occurs against a background in which the US is more unequal than ever.
Here I will defend a justification of a wealth tax. The justification I propose is different from traditional justifications, and it avoids some of the problems they have. Specifically, my justification is sensitive to the fact that a wealth tax (and, indeed, many other taxes) might burden agents who should not be burdened with this tax (or, who should not be burdened as much as other agents).
In the existing literature, a wealth tax has mainly been defended on the ground that extreme concentration of wealth is incompatible with the core liberal value of equality. I will call this defense outcome-based. This defense has two different versions.
First, the fact that a few people concentrate so much wealth, while the vast majority have little or nothing, undermines equality. The government should therefore implement a wealth tax, which will simply compensate for the fact that there are very rich and very poor people. A society that prides itself on being egalitarian simply cannot accept the big wealth gaps that we see in current societies.
Second, those who have too much money can use it to unfairly tailor the rules of society in their favor. For wealthier people, shaping the rules of the game in their favor is easier, more tempting, and more feasible, and the new rules they create will presumably increase their wealth even further. Whenever this happens, the core value of equality is also undermined, but in a different way. The problem here is that existing economic inequality creates the possibility of unequal access to political participation. This is morally unacceptable, as members of a political society should have equal access to the decision processes that affect them. The purpose of the wealth tax would be precisely to prevent this kind of interference from happening: by making the wealthy less wealthy, it will weaken their influence.
The outcome-based approach is a powerful strategy for justifying a wealth tax (or any tax), but it has serious limitations. Its main shortcoming is that it ignores the question of how the taxed wealth was created in the first place and, as a consequence, it ends up burdening agents who do not deserve to be burdened. Consider someone who inherited a huge amount of money versus someone who obtained their wealth by profiting from structural market injustices. In the former case, the heir has to pay the estate tax (if the tax exists) and the regular wealth tax. In the latter case, the taxpayer will have to pay the wealth tax only, at the same rate as in the former case. The consequence of directly applying the outcome approach is obvious: the burden of paying the tax will be unfairly distributed among taxpayers. Because of these limitations, the outcome approach has to be complemented with what I call a procedural approach. The outcome approach, combined with the procedural approach, is a better strategy for justifying a reasonable and fair wealth tax.
The main intuition underlying the procedural approach is that wealth resulting from unjust transactions is undeserved, and therefore those who benefit from such transactions should not be taxed the same way as those who obtained their wealth through just transactions. Unjust transactions are, for example, transactions that result from exploitation, domination, or unjust structural market conditions. Defending the procedural version of the tax requires three steps: first, defining “unfair” transactions; second, explaining how the procedural approach strengthens the outcome-based approach; and third, discussing its feasibility in public policy.
Significant/minimal digital/human involvement/intervention/presence – the never/ending tax/story
Fairness is one of the most challenging words in the tax world. At the same time, it is one of the most indeterminate, owing to the diversity of its perceptions. It is difficult to determine what is fair from both an objective and a subjective perspective. Fairness is a symbiosis of many social factors, of the development of society, and of the interaction between states at the international level. It changes over the years to stay in line with practical needs.
In recent years, we have witnessed many international initiatives by the Organisation for Economic Co-operation and Development (OECD), the United Nations (UN) and the European Commission (EC), whose main goal is to combat profit shifting and to find the right balance. This is particularly noticeable through the prism of the digital economy, which is no longer a novelty in our daily lives and can even be defined as an integral part of them. Despite the many proposals, it is difficult to reach a final decision on the taxation of the digital economy. The OECD addressed this issue in Action 1 of the Base Erosion and Profit Shifting (BEPS) project. It is also reflected in the EC’s proposal for a “significant digital presence”. Pillar 1 can also be included as relevant. The UN has proposed an entirely new Art. 12B on income from automated digital services. Digital services taxes have already been introduced in the legislation of several states, although their nature may nowadays be construed as disputable and even controversial. From a VAT perspective, the EC has made a number of proposals for new place of supply rules for services relating to virtual activities. All these initiatives are welcome contributions to this issue.
For the purposes of this paper, attention will be paid to a specific aspect that is particularly important (or not) for the digital economy: human presence. In this regard, it is necessary to consider its function through the prism of direct and indirect taxes.
At the EU level, the proposal for a significant digital presence, known as the “digital PE”, may be mentioned. Art. 12B, para 4 UN-MC is also intriguing: it contains the expression “minimal human involvement”. From a VAT perspective, “minimal human intervention” is used in Art. 7, para 1 of the VAT Implementing Regulation.
In this regard, it is interesting to consider the role of the expressions used, as well as their similarities and differences. The author will therefore try to answer the following questions. First, what is the “minimum” and where is the “maximum” of human presence (if any), and how is their threshold (if any) to be determined? Second, is it always necessary to talk about humans in the digital economy? Third, is there a difference between human “involvement” and “intervention”, or are they synonyms? Last but not least, it is intriguing to consider whether a criterion that is universally valid for both direct and indirect taxes can be used. This is especially important for some key concepts such as the permanent establishment and the fixed establishment. At this stage, in both cases, their digital manifestation is unlikely for various reasons. However, this topic will be the subject of further analysis in the future.
Finally, conclusions will be drawn about the real impact of humans and their intervention. If they are not “applicable” at all for the purposes of the digital economy, then do we need their further tax analysis? Conversely, if they have a definite role, then where is the line between the physical and the digital aspects?
The author has chosen this issue because it is vital for the digital economy from both a direct and an indirect tax perspective, and from both a theoretical and a practical point of view. His analysis will provide guidance on what awaits us in the future on this issue, as well as on whether common approaches to the interpretation of the above-mentioned terms can be found. It would also answer the more philosophical question of how much humans will influence the proper tax treatment, a topic that is becoming ever more significant due to the increasing use of robots and space.
A prominent theme in the discourse of international taxation is that taxing rights should follow wealth production. In considering the validity of this proposition, the paper will rely on the familiar dichotomy in moral philosophy between the right and the good. In the context of international taxation, the right involves a host country’s deontological claim to receive a portion of the income produced within its borders. The good involves the claim that host countries need revenue from multinational enterprises (MNEs) to fund public goods. Although the literature often conflates these two claims, they are distinct and require separate analysis.
Within the realm of the right, we must make a further distinction between two different types of right-based claims. On the one hand, a host country may assert that MNEs who choose to operate in its territory take upon themselves an implicit contractual obligation to pay tax as delineated in the host country’s laws. When the host country imposes an income tax, MNEs are in effect contractually obligated to pay the host country a percentage of the income generated by their economic activity in the host country. Alternatively, the host country may assert a neo-Lockean claim to a commensurate share of the wealth that its social capital – in the broadest possible sense of the term – helped to create.
Regarding the contractual claim, I argue that the terms of the contract are in almost all cases delineated by the host country’s tax legislation. In effect the host country offers a standard-form contract to foreign entities, which then signify their assent by investing or otherwise operating in the host country’s territory. Consequently, if the terms of the agreement are difficult to enforce, the most obvious response would be to adopt terms that are more easily enforceable. I posit that the reason host countries do not do so is because a stricter tax regime would make it difficult to compete for international investments against countries whose tax systems are easier to manipulate. In other words, the so-called “loopholes” are actually part and parcel of the implicit contractual arrangement between the host country and the MNE.
The neo-Lockean argument is that the creation of wealth within a country’s borders is effectively a joint project involving the exploitation of the MNE’s resources along with the social capital – in the broadest sense of the term – of the host country. Under neo-Lockean theory, the host country is entitled to a share of the income commensurate with its contribution to the production of that wealth, and income tax is the means by which it asserts that right. Profit shifting by MNEs understates the wealth actually created within the host country’s territory and prevents the host country from claiming its fair share of that income. I contend that this argument too does not succeed. First, from the mere fact that an MNE derives wealth from its operations in the territory of a certain country, it does not necessarily follow that the host country’s social capital contributes in any meaningful way to the production of that wealth. Second, even when there is reliance upon the social capital of the host country, the MNE will in most cases pay for its exploitation of that social capital via factor prices (particularly salaries and rent). Third, to the extent that the positive contribution of its social capital is not reflected in factor prices, the host country should be able effectively to impose tax on foreign entities. Its desire for more MNE tax revenue than it is capable of collecting in a competitive atmosphere constitutes at least prima facie evidence that it wants more than its actual contribution to the creation of wealth.
Moving from the right to the good, it is often asserted that the budgetary exigencies of host countries require that they collect taxes from MNEs and that without such revenue their ability to supply essential public goods would be seriously curtailed. However, this utilitarian claim does nothing to support the proposition that taxing rights should follow the production of wealth. In allocating taxing rights under the umbrella of the good, it is needs, and the capacity to meet those needs, that should dictate taxing power. To which of any number of countries the international tax regime should grant the power to tax a particular MNE’s income in the name of the good would be a function of the extent to which granting the taxing power to any particular country would promote total human happiness. The location of wealth production is irrelevant from this perspective.
The paper concludes by considering why the principle that taxing rights should follow value creation has gained such prominence in the discourse on international taxation. I speculate that what actually motivates countries is a parochial concept of the good in which the welfare of their constituents takes precedence over the welfare of others. However, as it is difficult to seek international cooperation to implement such a principle, they instead attempt to justify their position in terms of an objective principle, even if that principle ultimately lacks a normative justification.
Uniform International Tax Collection and Distribution for Global Development, a Utopian BEPS Alternative

Abstract
Fairness in International Taxation Symposium
University of Surrey
Henry Ordower
Saint Louis University School of Law
Under the guise of compelling multinational enterprises (MNEs) to pay their fair share of income taxes, the OECD and other multinational agencies introduced proposals to prevent MNEs from eroding the income tax base of developed economies by continuing to shift income artificially to low or zero tax jurisdictions. Some of the proposals garnered substantial multinational support, most recently adoption by 136 countries of the OECD/G20 Inclusive Framework, including recent support from the U.S. presidential administration for a global minimum tax. This Article reviews many of those international proposals.
The proposals tend to concentrate the incremental tax revenue from the prevention of base erosion into the treasuries of the developed economies although the minimum tax proposal known as GloBE encourages low tax countries to adopt the minimum rate. The likelihood that zero tax countries will transition successfully to imposing the minimum tax seems uncertain.
Developed economies lack a compelling moral claim to incremental revenue so this Article argues that collecting a fair tax from MNEs and other taxpayers should be a goal that is independent of claims on that revenue. This Article maintains that to prevent tax base erosion, the income tax base and administration must be uniform across national borders and the Article recommends applying uniform rules administered by an international taxing agency. The Article explores the convergence of tax rules under such an international taxing agency.
The Article illustrates the problem of uniform tax collection and distribution with a regional example of school funding in St. Louis County, Missouri, USA. Through that example, the Article shows the unnecessary and unfair manner in which some districts capture a disproportionate share of revenue and deploy it to provide higher quality education in their communities, leaving other communities far behind.
Distribution of tax revenue by the international agency should follow contextualized need. In addressing the conundrum of absolute poverty in the undeveloped and developing world vis-à-vis relative poverty in the developed world, the Article proposes that the taxing agency should distribute all incremental revenue from the uniform tax to where the need is greatest, in order to ameliorate absolute poverty and improve living standards without regard to income source. The location of income production, the destination of the goods and services generating the income, and the residence of the income producers should not determine the distribution of tax revenue. Rather, using contextualized need to determine distribution will enable developed economies to receive sufficient revenue to maintain their existing infrastructures and governmental services. Developed economies should forego new revenue, for which they have not budgeted, in favor of improving worldwide living conditions for all. The proposals for uniform, worldwide taxation and revenue sharing based on contextualized need are admittedly aspirational and utopian, but they are designed to encourage debate on the sharing of resources in our increasingly globalized world.
REEVALUATING THE ALLOCATION OF TAX COLLECTION OF IMMIGRANTS BETWEEN HOME COUNTRY AND HOST COUNTRY
TAMIR SHANAN AND DORON NAROTZKI
ABSTRACT
In 1972, when Professor Jagdish Bhagwati published his seminal proposal “The Brain Drain and Income Taxation, a Proposal,” his fundamental idea was to tax skilled workers who had emigrated from developing countries to developed countries and return at least some of the revenue to the developing countries to compensate them for their economic loss. Professor Bhagwati’s underlying rationale for this tax was the need to compensate developing countries for the losses they experience when individuals who were born, raised, and oftentimes professionally trained there leave for developed countries in order to find more lucrative employment opportunities (higher salaries, better working conditions, etc.) and improve their standard of living (more stable lives in the developed countries and better educational opportunities for the migrants’ children).
The basic idea of the so-called “brain drain tax” is that skilled migrants typically earn economic rents that rely on skills and know-how acquired in their home country (especially when training and education rely on state funding) and that, due to their relocation, benefit the host country, which did not invest any of its own resources in order to receive skilled professionals. Furthermore, such relocation of skilled professionals from developing to developed countries also results in shortages of skilled professionals in the developing countries and, as a result, puts those countries at a further disadvantage.
Professor Bhagwati’s proposal aimed mainly at promoting global fairness between developing and developed countries and focused on the phenomenon of skilled migrants leaving developing countries for developed ones. Our research wishes to develop this idea further and to explore the jurisdiction to tax individuals, and more specifically the fundamental principle of “residency” under existing international norms. Accordingly, our research will explore the economic and tax implications of migration in general, covering skilled and unskilled migrants and migration from one country to another (not necessarily from developing to developed countries), and will eventually suggest a model to help countries tax those individuals in a fairer manner, one that leans on social justice and on the social contracts and ties between the individual and her domiciliary community, rather than solely on relations between countries or on technical standards, as is currently the case.
One of the challenges in implementing such a tax is the fact that, under the current international tax regime, the taxing jurisdiction follows residence, and many countries define residency for tax purposes based on one version or another of a physical presence test (presence of more than 183 days in a country during the calendar year) or on the place of the migrant’s habitual abode in the relevant calendar year. As such, more often than not, immigrants are considered under existing rules to be residents of the host country rather than of their home country. One possible solution to this challenge is to strengthen the domiciliary concept, which locates an individual’s “home” in the country where she resides permanently without any intention of moving. Under this concept, for instance, an immigrant who completed a graduate degree in a host country and decides to work there for several years after graduating does not cease to have his permanent home in his home country merely because he is temporarily residing elsewhere. Another possible solution can be found in an alternative personal jurisdiction regime: the adoption of citizenship-based taxation.
The need to compensate home countries whose citizens relocate to another country for the loss of untaxed unrealized gains has been addressed by the many countries that adopted exit taxes. These exit taxes attempt to capture unrealized, untaxed appreciation of assets during the period the individual owned them, just before she or he abandoned tax residency or renounced citizenship. Unfortunately, these exit taxes do not capture human capital appreciation: at the time of migration, emigrants generally have not yet benefited from the increase in wages; many of the economic benefits deriving from the know-how and intellectual property they acquired or developed prior to relocation can easily be deferred; and the “appreciation” period (unlike the holding period of movable property) is less explicit and as such poses greater collection challenges for the home countries.
Our research will explore the different proposals raised by legal and tax scholars over the years regarding a brain drain tax and propose a model that would try to capture unrealized and untaxed economic “rent” deriving from know-how and skills that may be attributed to the home country. We will also compare the U.K. domiciliary-based regime with the U.S. citizenship-based regime and propose a model that can be relatively easily adopted, administered, and monitored by home countries and that will enhance global fairness between countries.
Abstract: This article analyzes the regulation of transfer prices, specifically inquiring into the purpose of establishing this obligation with respect to tax havens. It argues that applying this regime to tax havens serves to discipline and punish States that resist entering neoliberal economic globalization. To support this thesis, the article draws on Gerald Cohen’s analytical Marxism to analyze the international institutions, such as the OECD, that advocate full competition and unify tax regulations.
Keywords: Transfer pricing, multinationals, analytical Marxism, tax havens, globalization.
Proposal
Economic globalization is a long-standing process, but one that in our recent history is marked by the expansion of neoliberalism and the legal and economic adaptation of States to a unified global order built around freedom of capital and free competition. This translates into constitutional reforms and the enactment of organic laws that adjust each State’s institutional system on the basis of recommendations from international institutions. The key to this process is to “unify” the treatment of private capital with respect to its free movement and guarantees, while adapting State action to control other types of economies that distort the neoliberal model.
In this general context, this article addresses this problem in relation to the transfer pricing regime, which is applied in international tax auditing and is intended to prevent the erosion of the tax bases of the States involved in operations between economically linked parties and tax havens. At least that is how the OECD and the tax authority present it, but as this article argues, that purpose is only partially true: the application of this regime to tax havens also serves to discipline and punish States that resist entering neoliberal economic globalization. This thesis is reached by answering the following research question:
From the critical perspective of Gerald Cohen, what is the political and economic purpose of establishing the transfer pricing obligation with tax havens in the absence of an economic link?
This concern is the center of the academic dissertation presented in this proposal, which is developed in three sections. The first, on the arm’s length principle, characterizes the global economic order, outlines the relevant OECD regulation, and offers some glosses on the transnational economic power of companies from Gerald Cohen’s critical perspective. The second section addresses the transfer pricing obligation, explaining its usefulness and application and how it works in practice in the Colombian legal system, and problematizes the paradigm of “erosion of taxable bases” in tax havens. Finally, the third section compares the purpose of transfer prices between economically linked parties and tax havens, with the aim of identifying their role in economic globalization and sustainable development.
Corporate Tax: We Tried the Stick, How About the Carrot?
Doron Narotzki
Abstract
Due to corporations’ ongoing focus on tax planning and their continuous search for new tax minimization strategies, multinational corporations have not been paying their “fair share” of taxes for a long time now, and as a result the corporate tax law is unable to generate much tax revenue. This is a global phenomenon, by no means only a U.S. domestic problem. Governments’ response to this problem has always been the same and almost one-dimensional: introducing new tax laws and regulations and revising old ones in order to close the so-called “loopholes”, in the hope that this will put an end to corporate tax evasion. For decades this approach has failed us. This paper examines the history of the corporate tax in the U.S. and the corporate tax avoidance industry that has grown into a global problem over the last 50 years, and finally suggests a new policy that aims to give corporations an incentive to shift their focus and efforts away from abusive tax planning and into investments in the economy. At the heart of this new policy are the recent Pillar 2 and the Global Minimum Corporate Tax, which can and should be used to expand international cooperation between countries and as a tool for minimizing tax evasion. Pillar 2 is a truly historic moment for international and corporate tax: for the first time ever, it helped set and define a corporate tax rate acceptable to (at least) 136 countries. Now is the time for countries to adopt a new way of thinking: offer this rate as the carrot for corporations that choose to act in certain ways, while still keeping the stick, the current (higher) corporate tax rate, for those that choose not to adopt the new policy.
MCMC methods are a great tool for fitting data because they explore the whole parameter space and, more importantly, they are able to deliver uncertainties. Uncertainties are crucial because they show how reliable the fit is. When the fit looks reasonable and the uncertainties are not very high, you can claim that you have described your data successfully. However, what happens when your fits do not look so good, either because the values are unrealistic in the context of the system you are studying, or because the uncertainties are too high for the fits to be reliable? Is the data or the approach to fitting it the cause of the failure?
In this talk, I will discuss my personal experience using MCMC methods during the course of my PhD. Firstly, I will show through different examples how the physical interpretation of the fitted values and their uncertainties was crucial in solving problems in my research. Sometimes, failing to fit the data, especially due to high uncertainties, led me to new insights about the system I was struggling to understand, even taking a project in a whole new direction. Secondly, I will talk about situations where I still struggle to fit the data, and I will share my insights as to why.
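The kind of fit and uncertainty estimate described above can be sketched with a minimal random-walk Metropolis sampler. This is an illustrative toy, not any specific analysis from the talk: the straight-line model, flat priors, step size, and chain length are all arbitrary choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a straight line y = 2x + 1 with unit Gaussian noise.
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 1.0, x.size)

def log_posterior(theta):
    # Flat priors; Gaussian likelihood with known noise sigma = 1.
    slope, intercept = theta
    resid = y - (slope * x + intercept)
    return -0.5 * np.sum(resid ** 2)

# Random-walk Metropolis: propose a small jump, accept with the MH rule.
n_steps = 20000
chain = np.empty((n_steps, 2))
theta = np.array([0.0, 0.0])
lp = log_posterior(theta)
for i in range(n_steps):
    proposal = theta + rng.normal(0, 0.05, 2)
    lp_new = log_posterior(proposal)
    if np.log(rng.random()) < lp_new - lp:   # Metropolis acceptance rule
        theta, lp = proposal, lp_new
    chain[i] = theta

burned = chain[5000:]            # discard burn-in
mean = burned.mean(axis=0)       # posterior means, should be near (2, 1)
std = burned.std(axis=0)         # posterior spreads = the fit uncertainties
print(mean, std)
```

The posterior standard deviations are exactly the uncertainties the abstract emphasizes: if they are small and the means are physically sensible, the fit is trustworthy; if not, either the data or the model deserves scrutiny.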
Nowadays, the volume of science and engineering data has increased substantially, and a variety of models have been developed to help understand the observations. Markov chain Monte Carlo (MCMC) has been established as the standard procedure for inferring model parameters from the available data in a Bayesian framework. Real systems such as interacting galaxies require complex models, and these models are computationally prohibitive. The goal of this project is to provide a flexible platform for connecting a range of efficient algorithms for any user-defined circumstances. It will also serve as a testbed for assessing new state-of-the-art model-fitting algorithms.
The most commonly used MCMC methods are variants of the Metropolis-Hastings (MH) algorithm. At the beginning of this project and in this article, the standard MH-MCMC algorithm, together with the affine-invariant ensemble MCMC that has dominated astronomical analysis over the past decade, was tested to reveal the performance of each sampler on problems with known solutions. The Hamiltonian Monte Carlo algorithm was also tested, showing in which circumstances it outperforms the other two.
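For context, the affine-invariant ensemble method mentioned above is built around the Goodman and Weare "stretch move". The following NumPy sketch is a minimal illustration of that move on a toy correlated Gaussian, not the project's test suite; the target, walker count, and stretch parameter `a = 2` are illustrative defaults.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy target: 2-D Gaussian with strong correlation (known solution).
cov = np.array([[1.0, 0.9], [0.9, 1.0]])
cov_inv = np.linalg.inv(cov)

def log_prob(x):
    return -0.5 * x @ cov_inv @ x

n_walkers, n_dim, n_steps, a = 20, 2, 2000, 2.0
walkers = rng.normal(0, 0.1, (n_walkers, n_dim))
lp = np.array([log_prob(w) for w in walkers])
samples = []
for _ in range(n_steps):
    for k in range(n_walkers):
        # Pick a complementary walker and propose a stretch move.
        j = (k + rng.integers(1, n_walkers)) % n_walkers
        z = ((a - 1.0) * rng.random() + 1.0) ** 2 / a   # z ~ g(z) on [1/a, a]
        proposal = walkers[j] + z * (walkers[k] - walkers[j])
        lp_new = log_prob(proposal)
        # Acceptance includes the z^(dim-1) factor of the stretch move.
        if np.log(rng.random()) < (n_dim - 1) * np.log(z) + lp_new - lp[k]:
            walkers[k], lp[k] = proposal, lp_new
    samples.append(walkers.copy())

flat = np.concatenate(samples[500:])   # discard burn-in
# Sample correlation should approach the target's 0.9.
print(flat.mean(axis=0), np.corrcoef(flat.T)[0, 1])
```

Because the stretch move is invariant under affine transformations of parameter space, no step-size tuning is needed even for strongly correlated targets, which is the property that made this family of samplers so popular in astronomy.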
In my talk, I will start by introducing the basics of Markov Chain Monte Carlo (MCMC) methods. I will begin with the Metropolis and Gibbs samplers, and then proceed to the Hamiltonian Monte Carlo sampler. A key focus will be to describe the domains of applicability of each of these sampling methods and the difficulties they encounter when applied. I will then describe strategies to overcome some of the difficulties encountered with basic versions of these methods. I will also touch on output diagnostics, and on determining when a sampler is working as desired. Finally, I will consider the case of sampling where the likelihood of a model is expensive to compute, and how MCMC can be used in this situation. The latter case may be of interest in astronomy applications.
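As a minimal illustration of the Gibbs sampler mentioned above, the sketch below alternates draws from the full conditionals of a bivariate normal, for which both conditionals are known in closed form. The correlation value is an arbitrary choice for demonstration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Target: bivariate normal, zero means, unit variances, correlation rho.
rho, n_samples = 0.8, 20000
x = y = 0.0
draws = np.empty((n_samples, 2))
for i in range(n_samples):
    # Each full conditional of a bivariate normal is itself normal:
    # x | y ~ N(rho * y, 1 - rho^2), and symmetrically for y | x.
    x = rng.normal(rho * y, np.sqrt(1 - rho ** 2))
    y = rng.normal(rho * x, np.sqrt(1 - rho ** 2))
    draws[i] = x, y

# Sample means should be near zero; sample correlation near rho.
print(draws.mean(axis=0), np.corrcoef(draws.T)[0, 1])
```

Gibbs sampling needs no tuning at all, but it requires tractable conditionals and mixes slowly as the correlation approaches one, which is exactly the kind of limitation the talk contrasts against Hamiltonian Monte Carlo.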
Various multiobjective optimization algorithms have been proposed under the common assumption that the evaluation of each objective function takes the same amount of time. Little attention has been paid to more general and realistic optimization scenarios in which different objectives are evaluated by different computer simulations or physical experiments with different time complexities (latencies), and only a very limited number of function evaluations is allowed for the slow objective. In this work, we investigate benchmark scenarios with two objectives. We propose a transfer learning scheme within a surrogate-assisted evolutionary algorithm framework to augment the training data for the surrogate of the slow objective function by transferring knowledge from the fast one. Specifically, a hybrid domain adaptation method that aligns the second-order statistics and marginal distributions across domains is introduced to generate promising samples in the decision space according to the search experience on the fast objective. A Gaussian process model based co-training method is adopted to predict the value of the slow objective, and predictions with a high confidence level are selected as augmented synthetic training data, thereby enhancing the approximation quality of the surrogate of the slow objective. Our experimental results demonstrate that the proposed algorithm outperforms existing surrogate and non-surrogate-assisted delay-handling methods on a range of bi-objective optimization problems. The approach is also more robust to varying levels of latency and correlation between the objectives.
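One standard way to align second-order statistics across domains, of the kind the abstract describes, is CORAL-style correlation alignment: whiten the source samples with their own covariance and re-colour them with the target covariance. The sketch below is a simplified stand-in for the paper's hybrid domain adaptation method, not its actual implementation; the toy two-dimensional domains and the function name `coral_align` are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def coral_align(source, target, eps=1e-6):
    """Re-colour source samples so their covariance matches the target's."""
    cs = np.cov(source, rowvar=False) + eps * np.eye(source.shape[1])
    ct = np.cov(target, rowvar=False) + eps * np.eye(target.shape[1])
    # Whiten with the source covariance, then re-colour with the target's.
    whiten = np.linalg.inv(np.linalg.cholesky(cs))
    colour = np.linalg.cholesky(ct)
    centred = source - source.mean(axis=0)
    return centred @ whiten.T @ colour.T + target.mean(axis=0)

# Toy "fast" and "slow" objective domains with different covariances.
source = rng.normal(size=(500, 2)) @ np.array([[2.0, 0.0], [0.5, 1.0]])
target = rng.normal(size=(500, 2)) @ np.array([[1.0, 0.8], [0.0, 0.5]])
aligned = coral_align(source, target)
print(np.cov(aligned, rowvar=False))   # matches the target sample covariance
```

After alignment, samples gathered while searching the fast objective share the slow domain's first- and second-order statistics, so they can plausibly serve as synthetic training data for the slow objective's surrogate.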
Slice Sampling has emerged as a powerful Markov Chain Monte Carlo algorithm that adapts to the characteristics of the target distribution with minimal hand-tuning. However, Slice Sampling’s performance is highly sensitive to the user-specified initial length scale hyperparameter and the method generally struggles with poorly scaled or strongly correlated distributions. To this end, we introduce Ensemble Slice Sampling (ESS) and its Python implementation, zeus, a new class of algorithms that bypasses such difficulties by adaptively tuning the initial length scale and utilising an ensemble of parallel walkers in order to efficiently handle strong correlations between parameters. These affine-invariant algorithms are trivial to construct, require no hand-tuning, and can easily be implemented in parallel computing environments. Empirical tests show that Ensemble Slice Sampling can improve efficiency by more than an order of magnitude compared to conventional MCMC methods on a broad range of highly correlated target distributions. In cases of strongly multimodal target distributions, Ensemble Slice Sampling can sample efficiently even in high dimensions. We argue that the parallel, black-box and gradient-free nature of the method renders it ideal for use in scientific fields such as physics, astrophysics and cosmology which are dominated by a wide variety of computationally expensive and non-differentiable models.
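The stepping-out and shrinkage procedures at the heart of slice sampling (the building block that Ensemble Slice Sampling extends) can be sketched as follows — a univariate toy version, not the zeus implementation:

```python
import math
import random

def slice_sample(log_f, x0, n_steps, w=1.0, seed=0):
    """Univariate slice sampling with stepping-out and shrinkage.
    `w` is the initial length scale hyperparameter whose hand-tuning
    Ensemble Slice Sampling is designed to avoid."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        # Draw the vertical slice level y ~ U(0, f(x)), in log space.
        log_y = log_f(x) + math.log(rng.random())
        # Step out: grow [L, R] until both ends leave the slice.
        L = x - w * rng.random()
        R = L + w
        while log_f(L) > log_y:
            L -= w
        while log_f(R) > log_y:
            R += w
        # Shrink: sample uniformly until a point lands inside the slice.
        while True:
            x_new = rng.uniform(L, R)
            if log_f(x_new) > log_y:
                x = x_new
                break
            if x_new < x:
                L = x_new
            else:
                R = x_new
        samples.append(x)
    return samples

draws = slice_sample(lambda x: -0.5 * x * x, x0=0.0, n_steps=20000)
mean = sum(draws) / len(draws)
```

Note that every proposal is eventually accepted; the cost of a bad `w` shows up as extra density evaluations in the stepping-out and shrinkage loops, which is the sensitivity the abstract refers to.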
The improvement of energy efficiency of existing buildings is key for meeting 2030 and 2050 energy and CO2 emission targets. Thus, building simulation tools play a crucial role in evaluating the performance of energy retrofit actions, not only at present, but also under future climate scenarios.
A Bayesian calibration approach, combined with sensitivity analysis, is applied to reduce the discrepancies between measured and simulated hourly indoor air temperatures. Calibration is applied to a test cell case study developed using the EnergyPlus building simulation software. Several scenarios are evaluated to determine how different variables may impact the calibration process: orientations, activation of mechanical ventilation, different blind aperture levels, etc. Uncertainties associated with model inputs (fixed parameters in the energy model), model discrepancies due to physical limitations of the building energy model (simplifications when compared to the real performance of the building), errors in field observations and noisy measurements were also accounted for.
Even though the uncalibrated models were within the uncertainty ranges specified by the ASHRAE Guidelines, pre-calibration simulation outputs over-predicted measurements by up to 3.2 °C. After calibration, the average maximum temperature difference was reduced to 0.68 °C, improving the results by almost 80%. These techniques are thus proven to improve the level of agreement between on-site measurements and simulated outputs. Moreover, the implementation of this methodology is useful for calibrating and validating indoor hourly temperatures and, consequently, provides adequate results for thermal comfort assessment.
With a few lines of Python code, it is possible to train a neural network on a dataset and then use it to make extrapolations.
These techniques are routinely used in particle physics and condensed matter physics, and only recently has some pioneering work been done to apply them to nuclear physics.
Given the incredible potential of these techniques, is it still necessary to invest time to build complex nuclear models to do the same thing?
Why not directly use machine learning algorithms to analyse the data?
In my talk, I will present some applications of machine learning techniques to the case of nuclear masses: by using either neural networks [1] or Gaussian Process Emulators [2], I will show how these algorithms can reproduce this particular observable. In particular, I will consider a neural network used to reproduce nuclear masses both without any underlying model and in combination with a model to improve performance.
This procedure may be very helpful: in the short term, it will help us detect possible trends in the data and possibly make reliable predictions in nearby regions of the nuclear chart; in the long term, by interpreting the algorithms, we may learn what physics is missing from the nuclear models we currently use.
[1] Pastore, A., & Carnini, M. (2021). Extrapolating from neural network models: a cautionary tale. Journal of Physics G: Nuclear and Particle Physics.
[2] Shelley, M., & Pastore, A. (2021). A new mass model for nuclear astrophysics: crossing 200 keV accuracy. Universe, 7(5), 131.
Many networks, such as communication, social media, covert and criminal networks, have event-driven dynamics in which the intensity rate of events changes according to the occurrence of events in the network. In particular, events that occur at one node of the network can increase the intensity at other nodes, depending on their causal relationship. It is therefore of interest to use data to uncover the influence network, in which the edges represent the directional influence between nodes. An event-driven dynamic on a network can be modelled by a multi-dimensional Hawkes process driven by count data. In this setting, the influence structure of the network is parameterized by the Hawkes process. Understanding the uncertainty of the network constructed from the data is also important. This talk will discuss how we may build an ensemble of networks to reflect that uncertainty. The outcome will facilitate downstream uncertainty analysis for network applications such as node classification, link prediction and rare-event detection.
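A one-dimensional sketch of such a self-exciting process can be simulated with Ogata's thinning algorithm (a standard method; the talk's multi-dimensional network model is more involved). Here each event raises the intensity, which then decays exponentially:

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, T, seed=0):
    """Simulate a 1-D Hawkes process on [0, T] by Ogata's thinning.
    Conditional intensity: lam(t) = mu + sum_{t_i < t} alpha*exp(-beta*(t - t_i)).
    Requires alpha/beta < 1 for the process to be stable."""
    rng = random.Random(seed)
    events, t = [], 0.0
    while t < T:
        # Between events the intensity only decays, so the current
        # intensity is a valid upper bound for thinning.
        lam_bar = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        t += rng.expovariate(lam_bar)          # candidate event time
        if t >= T:
            break
        lam_t = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        if rng.random() < lam_t / lam_bar:     # accept with prob lam/lam_bar
            events.append(t)
    return events

events = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, T=200.0)
```

In the multi-dimensional case, the scalar `alpha` becomes a matrix whose entry (i, j) encodes how much an event at node j excites node i — exactly the influence structure the talk proposes to infer from data.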
Quantifying model uncertainty and performing model selection within a Bayesian framework is becoming an ever-larger part of scientific analysis both within and outside of astronomy. I will present a brief introduction to Nested Sampling, a framework complementary to Markov Chain Monte Carlo approaches that is designed to estimate marginal likelihoods (i.e. Bayesian evidences) as well as posterior distributions, outline some of its pros and cons, and briefly discuss more recent extensions such as Dynamic Nested Sampling. I will also briefly highlight `dynesty`, an open-source Python package designed to make it easy for researchers to apply Nested Sampling approaches to the various “black box” likelihoods present in their work.
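The core of the (static) Nested Sampling loop can be sketched in a few lines — a naive toy that replaces live points by rejection sampling from the prior, far simpler than what dynesty does:

```python
import math
import random

def nested_sampling(log_like, prior_draw, n_live=100, n_iter=600, seed=0):
    """Textbook nested sampling: keep n_live points drawn from the prior,
    repeatedly discard the worst one (accumulating its evidence shell),
    and replace it by a new prior draw with strictly higher likelihood.
    Prior volume shrinks deterministically as X_i = exp(-i / n_live)."""
    rng = random.Random(seed)
    live = [prior_draw(rng) for _ in range(n_live)]
    log_L = [log_like(x) for x in live]
    Z, X_prev = 0.0, 1.0
    for i in range(1, n_iter + 1):
        worst = min(range(n_live), key=lambda j: log_L[j])
        X = math.exp(-i / n_live)
        Z += math.exp(log_L[worst]) * (X_prev - X)   # shell contribution
        X_prev = X
        # Naive rejection sampling under the hard constraint L > L_worst.
        while True:
            x = prior_draw(rng)
            if log_like(x) > log_L[worst]:
                live[worst], log_L[worst] = x, log_like(x)
                break
    # Remaining live points account for the unexplored prior volume.
    Z += X_prev * sum(math.exp(l) for l in log_L) / n_live
    return Z

# Unit-normalised Gaussian likelihood, uniform prior on [-5, 5]:
# the analytic evidence is ~0.1 (the prior width dilutes the peak).
gauss_log_like = lambda x: -0.5 * x * x - 0.5 * math.log(2 * math.pi)
Z = nested_sampling(gauss_log_like, lambda rng: rng.uniform(-5, 5))
```

Real implementations replace the rejection step with bounding ellipsoids, slice moves, or other constrained samplers; the rejection version shown here becomes exponentially slow as the constrained prior volume shrinks, which is precisely the problem dynesty's machinery addresses.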
Computational fluid dynamics (CFD) is a simulation technique widely used in chemical and process engineering applications. However, computation has become a bottleneck when calibration of CFD models with experimental data (also known as model parameter estimation) is needed. In this research, the kriging meta-modelling approach (also termed Gaussian process regression) was coupled with expected improvement (EI) to address this challenge. A new EI measure was developed for the sum of squared errors (SSE), which conforms to a generalised chi-square distribution, so existing normal-distribution-based EI measures are not applicable. The new EI measure suggests which CFD model parameter values to simulate next, thereby minimising the SSE and improving the match between simulation and experiments. The usefulness of the developed method was demonstrated through a case study of single-phase flow in both a straight-type and a convergent-divergent-type annular jet pump, where a single model parameter was calibrated against experimental data. This talk is based on a journal article we previously published in the AIChE Journal.
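For contrast with the chi-square-based measure developed in this work, the standard normal-distribution-based EI — the one the abstract notes is *not* applicable to the SSE — looks like this for a minimisation problem:

```python
import math

def expected_improvement(mu, sigma, best):
    """Standard EI for minimisation under a Gaussian predictive
    distribution N(mu, sigma^2), where `best` is the lowest objective
    value observed so far:
        EI = (best - mu) * Phi(z) + sigma * phi(z),  z = (best - mu)/sigma."""
    if sigma <= 0.0:
        return max(best - mu, 0.0)
    z = (best - mu) / sigma
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))        # normal CDF
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # normal PDF
    return (best - mu) * Phi + sigma * phi

# A point predicted near the incumbent best is more promising than one
# predicted far above it (both with the same predictive uncertainty).
ei_near = expected_improvement(mu=0.9, sigma=0.3, best=1.0)
ei_far = expected_improvement(mu=2.0, sigma=0.3, best=1.0)
```

The formula balances exploitation (low predicted mean) against exploration (high predictive uncertainty); it presumes a Gaussian predictive distribution, which is exactly the assumption that breaks down for a sum of squared errors and motivates the paper's generalised chi-square EI.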
Since 2005 the UK higher education system has hosted a steadily increasing number of European students from outside the UK. EU students hold a widespread reputation for being capable and driven, and these qualities have made them desirable to UK universities. While their participation varies between institutions, and between the hierarchical layers of the sector, they have become recognised as a vital contributor to the diversity of the student fabric on UK campuses. However, following the United Kingdom’s exit from the European Union (commonly referred to as Brexit), EU students will soon pay higher fees in the UK and lose access to the UK’s pay-later tuition loans. Additionally, they will be subject to visa requirements, and their post-study stay will be constrained by migration rules. As a consequence of these changes, among others, it is anticipated that the number of EU students opting to study at UK universities may decline by up to half of their pre-Brexit numbers. These projections provide a window through which we can examine what the potential loss of European students would mean for institutions across the UK. To that end, this paper examines interviews conducted in 12 UK universities with 127 participants pre-Brexit, predominantly senior executives and members of academic leadership. The analysis uncovered a number of institutional representations of EU students that arose in response to Brexit, most often concerning: student numbers and income; diversity; and quality. Representations varied geographically across the different nations of the UK, largely due to the differences in funding regimes unique to each, but also to institutional hierarchies within a stratified higher education system.
The specificities of institutional representations within each nation highlighted the differential impacts the loss of EU students may have on universities across the sector, with notable implications for: future recruitment strategies; intra- and international competition; the breadth and nature of subject and study programme offerings; the creation and maintenance of collaboration networks; and interactions between students, funding, and research.