Sitemap

A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.

Pages

Page Not Found

Page not found. Your pixels are in another canvas.

About me

About me

Archive Layout with Content

Posts by Category

Posts by Collection

CV

Markdown

Page not in menu

This is a page not in the main menu

Page Archive

Portfolio

Publications

Sitemap

Posts by Tags

Talk map

Talks and presentations

Teaching

Terms and Privacy Policy

Blog posts

Jupyter notebook markdown generator

Posts

Blog Post number 4

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 3

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 2

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 1

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

portfolio

Portfolio item number 1

Short description of portfolio item number 1

Portfolio item number 2

Short description of portfolio item number 2

publications

Testing and Learning on Distributions with Symmetric Noise Invariance

H. Law, C. Yau, D. Sejdinovic

Advances in Neural Information Processing Systems (NeurIPS), 2017

Construct invariant features of distributions, leading to testing and learning algorithms robust to the impairment of the input distributions with symmetric additive noise. These features lend themselves to a straightforward neural network approach, and can also be easily implemented in many algorithms.

Paper Software Video

Bayesian Approaches to Distribution Regression

H. Law*, D. Sutherland*, D. Sejdinovic, S. Flaxman

Artificial Intelligence and Statistics (AISTATS), 2018

Construct a Bayesian distribution regression formalism that accounts for bag size uncertainty, improving the robustness and performance of existing models. The proposed models can be framed in a neural network style, and we demonstrate their performance on the IMDb-WIKI image dataset for celebrity age classification.

Paper Software

A Differentially Private Kernel Two-Sample Test

A. Raj*, H. Law*, D. Sejdinovic, M. Park

Preprint, 2018

Kernel two-sample testing is a useful statistical tool for determining whether data samples arise from different distributions without imposing any parametric assumptions on those distributions. However, raw data samples can expose sensitive information about individuals who participate in scientific studies, which makes the current tests vulnerable to privacy breaches. Hence, we design a new framework for kernel two-sample testing conforming to differential privacy constraints, in order to guarantee the privacy of subjects in the data.

Paper

Variational Learning on Aggregate Outputs with Gaussian Processes

H. Law, D. Sejdinovic, E. Cameron, T. CD Lucas, S. Flaxman, K. Battle, K. Fukumizu

Advances in Neural Information Processing Systems (NeurIPS), 2018

We construct an approach to learning from aggregation of outputs based on variational learning with Gaussian processes. In particular, we propose new bounds and tractable approximations, leading to improved prediction accuracy and scalability to large datasets, while explicitly taking uncertainty into account. We apply our framework to a challenging and important problem, the fine-scale spatial modelling of malaria incidence, with over 1 million observations.

Paper Software Video

Hyperparameter Learning via Distributional Transfer

H. Law, P. Zhao, J. Huang, D. Sejdinovic

Advances in Neural Information Processing Systems (NeurIPS), 2019

Bayesian optimisation is a popular technique for hyperparameter learning but typically requires initial exploration even in cases where potentially similar prior tasks have been solved. We propose to transfer information across tasks using kernel embeddings of distributions of training datasets used in those tasks. The resulting method converges faster than existing baselines, in some cases requiring only a few evaluations of the target objective.

Paper

talks

Talk 1 on Relevant Topic in Your Field

Published:

This is a description of your talk, which is a markdown file that can be all markdown-ified like any other post. Yay markdown!

Conference Proceeding talk 3 on Relevant Topic in Your Field

Published:

This is a description of your conference proceedings talk; note the different value in the type field. You can put anything in this field.

teaching

Teaching experience 1

Undergraduate course, University 1, Department, 2014

This is a description of a teaching experience. You can use markdown like any other post.

Teaching experience 2

Workshop, University 1, Department, 2015

This is a description of a teaching experience. You can use markdown like any other post.