

Recent seminars 2011

7 December 2011 Simultaneously Satisfying Linear Equations over F_2: MaxLin2 and Max-r-Lin2 Parameterized Above Average

Robert Crowston, Department of Computer Science, Royal Holloway, University of London

29 November 2011 Depth-Bounded Approximations to Logical Systems

Professor Marcello D'Agostino, Department of Human Sciences, University of Ferrara, Italy

Logic is informationally trivial and, yet, computationally hard. This is one of the most baffling paradoxes arising from the traditional account of logical consequence. Triviality stems from the widely accepted characterization of deductive inference as *non-ampliative*: the information carried by the conclusion is (in some sense) contained in the information carried by the premises. Computational hardness stems from the well-known results showing that most interesting logics are undecidable or, even when decidable, very likely to be intractable. This situation leads to the so-called *problem of logical omniscience*: if logic is informationally trivial, then a rational agent should always be aware that a certain sentence is a logical consequence of the data. However, this is clearly not the case when the logic in question lacks a feasible decision procedure.

15 November 2011 Combining initial segments of lists

Dr Wouter Koolen, Department of Computer Science, Royal Holloway, University of London

We propose a new way to build a combined list from K base lists, each containing N items. A combined list consists of top segments of various sizes from each base list so that the total size of all top segments equals N. A sequence of item requests is processed and the goal is to minimize the total number of misses. That is, we seek to build a combined list that contains all the frequently requested items. We first consider the special case of disjoint base lists. There, we design an efficient algorithm that computes the best combined list for a given sequence of requests. In addition, we develop a randomized online algorithm whose expected number of misses is close to that of the best combined list chosen in hindsight. We prove lower bounds that show that the expected number of misses of our randomized algorithm is close to the optimum.

In the presence of duplicate items, we show that computing the best combined list is NP-hard. We show that our algorithms still apply to a linearised notion of loss in this case.

We expect that this new way of aggregating lists will find many ranking applications.
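As a rough illustration of the offline objective described above (not the efficient algorithm from the talk), the sketch below counts the misses of a given combined list and brute-forces the best split of the N positions across K disjoint base lists in hindsight; the function names and the stars-and-bars enumeration are our own illustrative choices.

```python
# Illustrative sketch of the combined-list objective: given K disjoint base
# lists of N items each, a combined list takes the top k_i items from list i
# with sum(k_i) = N, and a request misses if the item is in no chosen segment.
from itertools import combinations

def misses(base_lists, sizes, requests):
    """Number of requests not covered by the chosen top segments."""
    covered = set()
    for lst, k in zip(base_lists, sizes):
        covered.update(lst[:k])
    return sum(1 for r in requests if r not in covered)

def best_combined_list(base_lists, requests):
    """Brute-force the best segment sizes in hindsight (exponential in K;
    the talk describes an efficient algorithm for the disjoint case)."""
    K = len(base_lists)
    N = len(base_lists[0])
    best = None
    # enumerate all (k_1, ..., k_K) with sum = N via stars and bars
    for bars in combinations(range(N + K - 1), K - 1):
        sizes, prev = [], -1
        for b in list(bars) + [N + K - 1]:
            sizes.append(b - prev - 1)
            prev = b
        m = misses(base_lists, sizes, requests)
        if best is None or m < best[0]:
            best = (m, sizes)
    return best
```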

10 May 2011 On Semi-Supervised Learning and Compressive Sensing

Prof Alexander Balinsky, Cardiff School of Mathematics, Cardiff University

In this talk we present a connection between semi-supervised learning and compressive sampling. We show that sparsity and compressibility of the learning function can be obtained from heavy-tailed distributions of filter responses or coefficients in spectral decomposition. In many cases the NP-hard problems of finding sparsest solutions can be replaced by l1-problems from convex optimization theory, which provide effective tools for semi-supervised learning. We present several conjectures and examples. As an example of the application of our results, we consider the colorization problem of natural images.

This is joint work with Helen Balinsky and Nassir Mohammad from Hewlett-Packard Labs.
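As a generic illustration of the relaxation mentioned in the abstract (not the speakers' method), the sketch below replaces the NP-hard sparsest-solution (l0) problem with an l1-penalised least-squares problem and solves it by iterative soft thresholding; all names and parameter values are illustrative assumptions.

```python
# Generic sketch: the NP-hard problem  min ||x||_0  s.t.  A x = b
# is relaxed to the convex problem     min 0.5*||A x - b||^2 + lam*||x||_1,
# solved here by iterative soft thresholding (ISTA).
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, b, lam=0.1, iters=500):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x

# tiny demo: recover a sparse vector from a few random measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100))
x_true = np.zeros(100)
x_true[[3, 47, 90]] = [1.5, -2.0, 0.7]
b = A @ x_true
x_hat = ista(A, b, lam=0.05, iters=2000)
```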

3 May 2011 An identity for kernel ridge regression

Dr Yuri Kalnishkan, Department of Computer Science, Royal Holloway, University of London

Ridge regression is a popular machine learning technique with widespread applications. In the talk I will discuss ridge regression in the contexts of functional analysis (reproducing kernel Hilbert spaces) and the theory of random fields (Gaussian covariances) and derive an identity linking the quadratic losses of kernel ridge regression in batch and on-line frameworks. Some corollaries of the identity providing upper bounds on the cumulative loss of on-line ridge regression will also be discussed. An alternative proof of the identity motivated by the aggregating algorithm will be presented.

The results in the talk appeared in recent papers "An Identity for Kernel Ridge Regression" by F. Zhdanov and Y. Kalnishkan (Proceedings of ALT 2010) and "Competing with Gaussian linear experts" by F. Zhdanov and V. Vovk (arXiv:0910.4683).
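For reference, the batch kernel ridge regression predictor whose losses the identity relates to the on-line setting has the standard closed form f(x) = k(x)^T (K + aI)^{-1} y. The sketch below is a minimal NumPy implementation with an RBF kernel; the function names and parameters are illustrative, not taken from the papers.

```python
# Minimal batch kernel ridge regression sketch: K is the kernel matrix on the
# training set, a is the ridge parameter, and the prediction on a test point x
# is f(x) = k(x)^T (K + a I)^{-1} y.
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_fit_predict(X_train, y_train, X_test, a=1.0, gamma=1.0):
    K = rbf_kernel(X_train, X_train, gamma)
    alpha = np.linalg.solve(K + a * np.eye(len(X_train)), y_train)
    return rbf_kernel(X_test, X_train, gamma) @ alpha
```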

11 April 2011 Domination When the Stars Are Out

Dr Matthias Mnich, International Computer Science Institute, Berkeley

We algorithmize the recent structural characterization for claw-free graphs by Chudnovsky and Seymour. Building on this result, we show that Dominating Set on claw-free graphs is (i) fixed-parameter tractable and (ii) even possesses a polynomial kernel. To complement these results, we establish that Dominating Set is not fixed-parameter tractable on the slightly larger class of graphs that exclude K_{1,4} as an induced subgraph. Our results provide a dichotomy for Dominating Set in K_{1,L}-free graphs and show that the problem is fixed-parameter tractable if and only if L ≤ 3. Finally, we show that our algorithmization can also be used to show that the related Connected Dominating Set problem is fixed-parameter tractable on claw-free graphs.
 (Joint work with Danny Hermelin, Erik Jan van Leeuwen and Gerhard Woeginger)

23 March 2011 The computational analysis of microRNAs using next-generation sequencing data in plants

Prof Ming Chen, Zhejiang University, People's Republic of China

22 March 2011 Parameters for fun and profit! Some new developments in parameter ecology

Prof Michael Fellows, Charles Darwin University, Australia

The talk will motivate the research program in parameterized complexity and algorithmics that has loosely become known as 'parameter ecology'. What is generally going on is that, as the field advances, we are becoming ever more adept at rich parameterizations of NP-hard problems. There are good practical reasons to explore such things. The mathematical techniques required are often fresh and interesting. Some recent results that are leading this area forward will be surveyed.

15 March 2011 Random Access to Grammar-Compressed Strings and Trees

Prof Rajeev Raman, University of Leicester

Modern applications involving processing textual and semi-structured data need to do fairly complex computations on data held in the main memory of a computer, which is a limited resource in many contexts. While data compression can reduce the size of data, it presents significant obstacles to operating on the compressed data, and new data structuring techniques need to be developed to overcome these obstacles. After an introduction to the issues, we will focus on one particular recent result.

Let S be a string of length N compressed into a context-free grammar S of size n. We present a representation of S achieving O(log N) random access time, and O(n) construction time and space on the RAM. We are able to generalize our results to navigation and other operations on grammar-compressed trees. To achieve these bounds, we introduce several new techniques and data structures of independent interest, including a predecessor data structure, two "biased" weighted ancestor data structures, and a compact representation of heavy paths in grammars.
[Although this will be summarizing some other papers, the main result above is joint with Bille, Landau, Satti, Sadakane and Weimann and some parts of this paper were presented at the ACM-SIAM SODA 2011.]
Rajeev Raman, http://www.cs.le.ac.uk/~rraman
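As a naive point of comparison (not the O(log N) construction from the talk), the sketch below illustrates random access in a grammar-compressed string given as a straight-line program: storing the expansion length of every nonterminal lets one descend to the i-th character in time proportional to the derivation height. The class and rule format are illustrative assumptions.

```python
# Naive random access in a straight-line program (SLP): each rule is either
# X -> Y Z (a pair of nonterminals) or X -> 'a' (a single character).  With
# precomputed expansion lengths, access costs O(derivation height); the result
# discussed above improves this to O(log N) for any grammar shape.
class SLP:
    def __init__(self, rules, start):
        self.rules = rules            # name -> ('Y', 'Z') or a single character
        self.start = start
        self.length = {}
        self._len(start)

    def _len(self, X):
        if X not in self.length:
            rhs = self.rules[X]
            self.length[X] = 1 if isinstance(rhs, str) else \
                self._len(rhs[0]) + self._len(rhs[1])
        return self.length[X]

    def access(self, i):
        X = self.start
        while True:
            rhs = self.rules[X]
            if isinstance(rhs, str):
                return rhs
            Y, Z = rhs
            if i < self.length[Y]:
                X = Y
            else:
                i -= self.length[Y]
                X = Z

# example: with rules {'S': ('A', 'B'), 'A': ('a', 'b'), 'B': ('A', 'A'),
#          'a': 'a', 'b': 'b'} the start symbol 'S' expands to "ababab",
#          and SLP(rules, 'S').access(4) returns 'a'.
```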

Inaugural Lecture 10 March 2011 The Language of Mechanism

Professor Adrian Johnstone

Department of Computer Science, Royal Holloway, University of London

This lecture looks at the transition from visual to textual design in systems engineering, and gives an overview of new developments in language analysis that provide both greater flexibility and greater precision in the process of interpreting a design, be it software or a complete digital electronic system.

9 March 2011 Software Engineering the novel

Dr Joseph Reddington, Department of Computer Science, Royal Holloway, University of London

Last year, a group of undergraduate students at Royal Holloway College used computer science techniques to collectively write a 60,000 word novel, "The Shadow Hours", in seven days. Later, a second group produced a more complex novel, "The Delivery", of the same length, in five and a half days. Of particular interest was the development of the volunteers. They were involved in the full workflow of novel writing, from inception and structure to proofing and choosing a cover illustration, and this greatly increased their investment and focus on the work. The environment fostered great progress not only in their subject skills of structuring, characterisation, and word choice, but also in the softer skills of teamwork, communication, and feedback. Project TooManyCooks won Royal Holloway's Team Teaching Prize this summer and has attracted various small amounts of commercialisation and outreach funding. This talk is particularly intended for students and staff with an interest in computer science, education, or creative writing. Works by Dan Brown, Stephenie Meyer and JK Rowling pop up as examples; familiarity isn't required, but the talk will definitely spoil some plot twists.

8 March 2011 Practical constructions for the efficient cryptographic enforcement of interval-based access control policies

Dr Jason Crampton, Information Security Group, Department of Mathematics, Royal Holloway, University of London

The enforcement of access control policies using cryptography has received considerable attention in recent years. Such enforcement schemes vary in the amount of storage and the number of key derivation steps that are required in the worst case. These parameters correspond, respectively, to the number of edges and the diameter of some graph that is used to represent the access control policy. In this talk we will consider a particular class of access control policies and the associated graphs. We then present a number of techniques for constructing a new graph that has a smaller diameter than the original graph but enforces the same access control policy.
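As a generic illustration of the correspondence between derivation steps and graph edges (not one of the constructions from the talk), the sketch below derives keys along a simple chain with one hash application per edge, so the worst-case number of derivations equals the chain's diameter; the labels and key material are illustrative assumptions.

```python
# Generic illustration: keys are assigned to nodes of a directed graph, and an
# edge (u, v) lets the holder of u's key derive v's key with one hash
# application.  The worst-case number of derivation steps is the diameter of
# the graph, which the constructions in the talk aim to reduce.
import hashlib

def derive(parent_key: bytes, child_label: str) -> bytes:
    return hashlib.sha256(parent_key + child_label.encode()).digest()

# a total order t1 > t2 > t3 enforced as a simple chain:
k1 = b"master secret for t1"
k2 = derive(k1, "t2")     # holder of k1 reaches k2 in one step
k3 = derive(k2, "t3")     # ...and k3 in two steps (the chain's diameter)
```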

4 March 2011 Conditional Generalisation Error Bounds using Inductive Conformal Predictors

Dr Zakria Hussain, Department of Computer Science, University College London

We use the ideas from the Inductive Conformal Predictor (ICP) framework to construct generalisation error bounds for classification. We show how we can fuse together d>1 different nonconformity measures to construct "p-values", and use sample compression bounds in order to bound the loss of falling into a specific region constructed by these d-dimensional p-values. Hence, after fixing the calibration set we can prove upper bounds on the loss of test points, once they fall into a predefined region. Given time, we will also show an approach to using nonconformity measures for model selection using, for instance, the SVM.
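For readers unfamiliar with the ICP framework, the sketch below shows the standard inductive conformal p-value computed from a fixed calibration set of nonconformity scores; it is only a minimal sketch of the ingredient, not the generalisation error bounds themselves, and the names are illustrative.

```python
# Standard inductive conformal p-value: given nonconformity scores of a fixed
# calibration set, the p-value of a test example with score s is the fraction
# of scores at least as nonconforming, counting the test example itself.
import numpy as np

def icp_p_value(calibration_scores, test_score):
    calibration_scores = np.asarray(calibration_scores)
    n = len(calibration_scores)
    return (np.sum(calibration_scores >= test_score) + 1) / (n + 1)

# with d > 1 nonconformity measures, each test point gets a d-dimensional
# vector of p-values, and the bounds in the talk apply to regions of that cube
p = [icp_p_value([0.2, 0.5, 0.9, 1.3], s) for s in (0.1, 1.0)]
```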

3 March 2011 What to make with DNA origami

Dr Shawn Douglas, Wyss Institute, Harvard University

The programmability of DNA makes it an attractive material for constructing intricate nanoscale shapes. The DNA origami method, in which a multiple-kilobase single-stranded "scaffold" is folded into a custom shape by interaction with hundreds of short oligonucleotide "staple" strands, has proved to be extremely versatile for creating custom two- and three-dimensional shapes. The next challenge we aim to address is to apply our nanoscale building techniques to create tools and devices with demand-meeting applications. This talk will describe preliminary steps we've taken to develop a therapeutic nanostructure with selective targeting capability, as well as new CAD software tools to aid in future design efforts.

1 March 2011 Dual addition chains and their application to exponentiation in different directions

Dr Colin Walter, Information Security Group, Department of Mathematics, Royal Holloway, University of London

Location-aware addition chains are used to determine schemes for exponentiation in resource-constrained devices. A dual will be constructed for such addition chains. This reverses the order of an addition chain, and thereby provides a duality between left-to-right and right-to-left exponentiation schemes which preserves space requirements. The duality extends a previously known transformation between such schemes that preserves the time cost. All the usual algorithms for exponentiation have duals in the opposite direction. In particular, Yao's right-to-left scheme is derived as the dual of the normal table-based left-to-right m-ary scheme with an explicit pairing of registers between the two algorithms, and a new multi-base algorithm is obtained by taking the dual of the right-to-left division chain method.
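To fix the two directions the duality operates between, the sketch below gives the textbook left-to-right and right-to-left binary square-and-multiply exponentiation routines; it is a plain illustration of the two directions, not the dual-chain construction itself, and the function names are our own.

```python
# The two textbook binary exponentiation directions between which the duality
# above operates (illustration only).
def power_left_to_right(x, e, mod):
    # scan exponent bits from the most significant end
    r = 1
    for bit in bin(e)[2:]:
        r = (r * r) % mod
        if bit == '1':
            r = (r * x) % mod
    return r

def power_right_to_left(x, e, mod):
    # scan exponent bits from the least significant end (Yao-style direction)
    r, base = 1, x % mod
    while e:
        if e & 1:
            r = (r * base) % mod
        base = (base * base) % mod
        e >>= 1
    return r

assert power_left_to_right(7, 123, 1009) == power_right_to_left(7, 123, 1009) == pow(7, 123, 1009)
```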

1 February 2011 Query Learning of Some Context Free Languages

Dr Alex Clark, Department of Computer Science, Royal Holloway, University of London

In many situations, we are interested in modeling sequence data: typically by constructing a representation of a formal language from a sequence of examples. For example, in understanding how children might acquire their first language, we need to understand how richly structured representations can be constructed for languages that exhibit the sorts of structural dependencies typical of natural languages. Until quite recently, there were almost no algorithms for doing this except in the case where the languages are regular. In this talk, we will discuss recent algorithms which can model non-regular and indeed non-context-free languages. These approaches are based on distributional learning, which models the relationship between contexts and substrings in a language. We will then present a simple algorithm for learning a class of context-free grammars using Angluin's minimally adequate teacher model of exact query learning.

25 January 2011 Parameterization of graph separation problems: current results and further challenges

Dr Igor Razgon, University of Leicester

Fixed-parameter computation is a methodology for coping with NP-hardness that, under certain conditions, allows one to solve hard optimization problems both precisely and efficiently. According to this methodology, a problem is associated with a parameter $k$ and solved by an exponential-time algorithm whose exponential part depends solely on the parameter, while the dependency on the input size $n$ is uniformly polynomial, i.e. the degree of the polynomial is a constant, usually not greater than 3. When the parameter is small, we get a low-degree polynomial algorithm solving an NP-hard problem to optimality. Problems that can be solved in this way are called fixed-parameter tractable (FPT). The choice of parameter depends on the particular application and can be, for example, the maximum or minimum allowed size of the output, or some structural measure of the input, e.g. treewidth.

This talk is concerned with the parameterization of graph separation problems, where the task is to find the smallest set of vertices separating some predefined pairs of terminals, possibly under additional constraints. In the first part of the talk, I will informally define a fixed-parameter algorithm and describe two fixed-parameter algorithms for the Vertex Cover problem. In the second part of the talk I will describe an algorithm for solving the multiway cut problem and show how the main idea of this algorithm has helped to establish the fixed-parameter tractability of such challenging graph separation problems as Directed Feedback Vertex Set, minimum 2CNF deletion, and Multicut. In the final part of the talk I will outline the current challenges related to the parameterization of graph separation problems. The main challenge is kernelization of these problems, i.e. designing a polynomial-time procedure that shrinks the initial instance of the problem so that its size depends polynomially on the parameter. Such a procedure, applied at the preprocessing stage, can significantly enhance any algorithm solving the given problem; hence understanding the kernelizability of parameterized problems is a research direction of great practical importance.
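As a concrete example of a fixed-parameter algorithm, the sketch below shows the classic bounded search tree algorithm for Vertex Cover (one standard example; not necessarily the exact algorithms from the talk): pick any uncovered edge, branch on which endpoint joins the cover, and decrement the budget, for a running time of O(2^k · poly(n)).

```python
# Classic bounded search tree for Vertex Cover: some endpoint of any remaining
# edge must be in the cover, so branch on both endpoints with budget k - 1.
# The running time is O(2^k * poly(n)), i.e. FPT in the cover size k.
def vertex_cover_at_most_k(edges, k):
    if not edges:
        return True                     # nothing left to cover
    if k == 0:
        return False                    # budget exhausted but edges remain
    u, v = edges[0]
    remove_u = [e for e in edges if u not in e]
    remove_v = [e for e in edges if v not in e]
    return vertex_cover_at_most_k(remove_u, k - 1) or \
           vertex_cover_at_most_k(remove_v, k - 1)

# a 4-cycle has a vertex cover of size 2 but not of size 1
cycle = [(1, 2), (2, 3), (3, 4), (4, 1)]
assert vertex_cover_at_most_k(cycle, 2) and not vertex_cover_at_most_k(cycle, 1)
```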

The talk is self-contained and assumes no prior familiarity with the area.

21 January 2011 (Almost) 20/20 Foresight - A Predicted Model for Advanced Persistent Threats in Industrial Process Control Systems

Richard McEvoy, Information Security Group, Department of Mathematics, Royal Holloway, University of London

Stuxnet and similar threats are a visibly new trend in Information Security threats, in particular, threats to the critical infrastructure of nation-states. But the approach and techniques utilised -- including probable attack motivations -- have been predicted by researchers working in intrusion detection over the past decade. This seminar presents a predicted (and predictive) model for advanced persistent threats in SCADA and DCS (aka process control) networks and shows how closely Stuxnet fits to the predicted pattern. Further aspects of the model are explored. In addition, the seminar also touches on research efforts to detect, prevent and intervene in such attacks.

18 January 2011 $k$-radius sequences

Professor Simon Blackburn, Department of Mathematics, Royal Holloway, University of London

This talk is based on the following papers:
S.R. Blackburn, 'The existence of k-radius sequences'. See http://arxiv.org/abs/1101.1172
S.R. Blackburn and J.F. McKee, 'Constructing k-radius sequences', Mathematics of Computation, to appear. See http://arxiv.org/abs/1006.5812
Joint work with James McKee.

Given a collection of estimators, the problem of linear, convex or model selection type aggregation consists in constructing a new estimator, called the aggregate, which is nearly as good as the best among them (or nearly as good as their best linear or convex combination), with respect to a given risk criterion. This problem can also be considered with a given dictionary of functions instead of estimators. When the underlying model is sparse, which means that it is well approximated by a linear combination of a small number of functions in the dictionary, aggregation techniques turn out to be very useful in taking advantage of sparsity. Aggregates are usually constructed by mixing the initial estimators or functions of the dictionary with data-dependent weights that can be computed in several possible ways. In this talk we consider aggregates with exponential weights. We show that they have some remarkable properties. In particular, they satisfy sharp oracle inequalities that allow one to treat in a unified way the problems of aggregation and of sparse estimation.
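As a minimal sketch of the exponential-weighting idea (illustrative only, not the specific aggregate analysed in the talk), each dictionary function receives a weight proportional to exp(-empirical risk / temperature) and the aggregate is the resulting mixture; the function name and temperature parameter below are our own assumptions.

```python
# Minimal sketch of aggregation with exponentially weighted, data-dependent
# weights: f_j gets weight proportional to exp(-risk(f_j) / beta), and the
# aggregate is the weighted mixture of the fitted values.
import numpy as np

def exponential_weights_aggregate(predictions, y, beta=1.0):
    """predictions: array of shape (M, n), one row of fitted values per
    dictionary function; y: observed responses of length n."""
    risks = np.mean((predictions - y) ** 2, axis=1)       # empirical risks
    w = np.exp(-(risks - risks.min()) / beta)             # stabilised weights
    w /= w.sum()
    return w @ predictions                                # aggregated fit
```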
