
**Robert Crowston**, Department of Computer Science, Royal Holloway, University of London

**Professor Marcello D'Agostino**, Department of Human Sciences, University of Ferrara, Italy

Logic is informationally trivial and yet computationally hard. This is one of the most baffling paradoxes arising from the traditional account of logical consequence. Triviality stems from the widely accepted characterization of deductive inference as *non-ampliative*: the information carried by the conclusion is (in some sense) contained in the information carried by the premises. Computational hardness stems from the well-known results showing that most interesting logics are undecidable or, even when decidable, very likely to be intractable. This situation leads to the so-called *problem of logical omniscience*: if logic is informationally trivial, then a rational agent should always be aware that a certain sentence is a logical consequence of the data. However, this is clearly not the case when the logic in question lacks a feasible decision procedure.

**Dr Wouter Koolen**, Department of Computer Science, Royal Holloway, University of London

We propose a new way to build a combined list from K base lists, each containing N items. A combined list consists of top segments of various sizes from each base list so that the total size of all top segments equals N. A sequence of item requests is processed and the goal is to minimize the total number of misses. That is, we seek to build a combined list that contains all the frequently requested items. We first consider the special case of disjoint base lists. There, we design an efficient algorithm that computes the best combined list for a given sequence of requests. In addition, we develop a randomized online algorithm whose expected number of misses is close to that of the best combined list chosen in hindsight. We prove lower bounds that show that the expected number of misses of our randomized algorithm is close to the optimum.
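For the disjoint case, the offline problem reduces to allocating a top-segment budget across the K lists, which a simple dynamic program solves. The sketch below is a minimal illustration of that reduction (the function name and interface are hypothetical, and the talk's efficient algorithm may differ):

```python
from collections import Counter

def best_combined_list(base_lists, requests):
    """Offline DP sketch for disjoint base lists: choose top-segment
    sizes, one per list, summing to N, so that the number of misses
    on `requests` is minimised.  Returns (sizes, misses)."""
    counts = Counter(requests)
    K = len(base_lists)
    N = len(base_lists[0])
    # gain[i][t] = number of requests served by the top t items of list i
    gain = []
    for lst in base_lists:
        g = [0]
        for item in lst:
            g.append(g[-1] + counts[item])
        gain.append(g)
    NEG = float("-inf")
    # dp[b] = best gain using the lists processed so far with budget b
    dp = [0] + [NEG] * N
    choice = []  # choice[i][b] = segment size chosen for list i at budget b
    for i in range(K):
        nxt = [NEG] * (N + 1)
        arg = [0] * (N + 1)
        for b in range(N + 1):
            if dp[b] == NEG:
                continue
            for t in range(N - b + 1):
                v = dp[b] + gain[i][t]
                if v > nxt[b + t]:
                    nxt[b + t] = v
                    arg[b + t] = t
        dp = nxt
        choice.append(arg)
    # recover the segment sizes by walking the choices backwards
    sizes = [0] * K
    b = N
    for i in range(K - 1, -1, -1):
        sizes[i] = choice[i][b]
        b -= sizes[i]
    return sizes, len(requests) - dp[N]
```

For example, with base lists `['a','b']` and `['c','d']` and requests `a,a,c,c,c,b`, the best combined list takes one item from each list (`a` and `c`), missing only the single request to `b`.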

In the presence of duplicate items, we show that computing the best combined list is NP-hard. We show that our algorithms still apply to a linearised notion of loss in this case.

We expect that this new way of aggregating lists will find many ranking applications.

**Prof Alexander Balinsky**, Cardiff School of Mathematics, Cardiff University

In this talk we present a connection between semi-supervised learning and compressive sampling. We show that sparsity and compressibility of the learning function can be obtained from heavy-tailed distributions of filter responses or coefficients in spectral decomposition. In many cases the NP-hard problems of finding sparsest solutions can be replaced by l1-problems from convex optimization theory, which provide effective tools for semi-supervised learning. We present several conjectures and examples. As an example of the application of our results, we consider the colorization problem of natural images.

This is joint work with Helen Balinsky and Nassir Mohammad from Hewlett-Packard Labs.

**Dr Yuri Kalnishkan**, Department of Computer Science, Royal Holloway, University of London

Ridge regression is a popular machine learning technique with widespread applications. In the talk I will discuss ridge regression in the contexts of functional analysis (reproducing kernel Hilbert spaces) and the theory of random fields (Gaussian covariances) and derive an identity linking the quadratic losses of kernel ridge regression in batch and on-line frameworks. Some corollaries of the identity providing upper bounds on the cumulative loss of on-line ridge regression will also be discussed. An alternative proof of the identity motivated by the aggregating algorithm will be presented.
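As background, the batch predictor whose loss the identity analyses is standard kernel ridge regression. The sketch below shows that predictor with an RBF kernel (the identity itself, relating this loss to the on-line protocol, is the talk's contribution and is not reproduced here):

```python
import numpy as np

def krr_predict(X, y, X_test, a=1.0, sigma=1.0):
    """Kernel ridge regression with an RBF kernel:
    y_hat(x) = k(x)^T (K + a I)^{-1} y, where K is the kernel matrix
    on the training set.  A minimal batch-mode sketch."""
    def rbf(A, B):
        # squared Euclidean distances between all pairs of rows
        d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d / (2 * sigma ** 2))
    K = rbf(X, X)
    alpha = np.linalg.solve(K + a * np.eye(len(X)), y)
    return rbf(X_test, X) @ alpha
```

As the regularisation parameter `a` tends to zero, the predictor interpolates the training data, which is one easy sanity check on an implementation.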

The results in the talk appeared in the recent papers "An Identity for Kernel Ridge Regression" by F. Zhdanov and Y. Kalnishkan (Proceedings of ALT 2010) and "Competing with Gaussian linear experts" by F. Zhdanov and V. Vovk (arXiv:0910.4683).

**Dr Matthias Mnich**, International Computer Science Institute, Berkeley

We algorithmize the recent structural characterization of claw-free graphs by Chudnovsky and Seymour. Building on this result, we show that Dominating Set on claw-free graphs is (i) fixed-parameter tractable and (ii) even possesses a polynomial kernel. To complement these results, we establish that Dominating Set is not fixed-parameter tractable on the slightly larger class of graphs that exclude K_{1,4} as an induced subgraph. Our results provide a dichotomy for Dominating Set in K_{1,L}-free graphs and show that the problem is fixed-parameter tractable if and only if L ≤ 3. Finally, we show that our algorithmization can also be used to show that the related Connected Dominating Set problem is fixed-parameter tractable on claw-free graphs.

(Joint work with Danny Hermelin, Erik Jan van Leeuwen and Gerhard Woeginger)

**Prof Ming Chen**, Zhejiang University, People's Republic of China

**Prof Michael Fellows**, Charles Darwin University, Australia

The talk will motivate the research program in parameterized complexity and algorithmics that has loosely become known as 'parameter ecology'. What is generally going on is that, as the field advances, we are becoming ever more adept at rich parameterizations of NP-hard problems. There are good practical reasons to explore such things. The mathematical techniques required are often fresh and interesting. Some recent results that are leading this area forward will be surveyed.

**Prof Rajeev Raman**, University of Leicester

Modern applications involving processing textual and semi-structured data need to do fairly complex computations on data held in the main memory of a computer, which is a limited resource in many contexts. While data compression can reduce the size of data, it presents significant obstacles to operating on the compressed data, and new data structuring techniques need to be developed to overcome these obstacles. After an introduction to the issues, we will focus on one particular recent result.

Let S be a string of length N compressed into a context-free grammar S of size n. We present a representation of S achieving O(log N) random access time, and O(n) construction time and space on the RAM. We are able to generalize our results to navigation and other operations on grammar-compressed trees. To achieve these bounds, we introduce several new techniques and data structures of independent interest, including a predecessor data structure, two "biased" weighted ancestor data structures, and a compact representation of heavy-paths in grammars.
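For contrast with the O(log N) bound above, the obvious baseline for random access in a grammar-compressed string descends from the root, steering left or right by precomputed expansion lengths; its cost is the grammar's height, which can be much larger than log N. A minimal sketch, assuming a hypothetical dict-based encoding of a straight-line program (rules are either a terminal character or a pair of nonterminals):

```python
def slp_access(grammar, root, i):
    """Return the i-th character (0-based) of the string generated by
    a straight-line program.  Naive descent: O(height) per access,
    versus the O(log N) achieved by the result in the talk."""
    # precompute the expansion length of every nonterminal
    length = {}
    def size(x):
        if x not in length:
            r = grammar[x]
            length[x] = 1 if isinstance(r, str) else size(r[0]) + size(r[1])
        return length[x]
    size(root)
    x = root
    while True:
        r = grammar[x]
        if isinstance(r, str):
            return r          # reached a terminal rule
        left, right = r
        if i < length[left]:
            x = left          # target position lies in the left expansion
        else:
            i -= length[left]
            x = right
```

For instance, the grammar `{'A': 'a', 'B': 'b', 'C': ('A', 'B'), 'D': ('C', 'C')}` generates "abab" from `D`, and each access walks one root-to-leaf path.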

[Although this talk will summarize several papers, the main result above is joint work with Bille, Landau, Satti, Sadakane and Weimann, and some parts of this paper were presented at ACM-SIAM SODA 2011.]

Department of Computer Science, Royal Holloway, University of London

This lecture looks at the transition from visual to textual design in systems engineering, and gives an overview of new developments in language analysis that provide both greater flexibility and greater precision in the process of interpreting a design, be it software or a complete digital electronic system.

**Dr Joseph Reddington**, Department of Computer Science, Royal Holloway, University of London

Last year, a group of undergraduate students at Royal Holloway College used computer science techniques to collectively write a 60,000-word novel, "The Shadow Hours", in seven days. Later, a second group produced a more complex novel, "The Delivery", of the same length in five and a half days. Of particular interest was the development of the volunteers. They were involved in the full workflow of novel writing, from inception and structure to proofing and choosing a cover illustration, and this greatly increased their investment and focus on the work. The environment allowed great progress in their subject skills of structuring, characterisation, and word choice, but also in the softer skills of teamwork, communication, and feedback. Project TooManyCooks won Royal Holloway's Team Teaching Prize this summer and has attracted various small amounts of commercialisation and outreach funding. This talk is particularly intended for students and staff with an interest in computer science, education, or creative writing. Works by Dan Brown, Stephenie Meyer and JK Rowling pop up as examples; familiarity isn't required, but the talk will definitely spoil some plot twists.

**Dr Jason Crampton**, Information Security Group, Department of Mathematics, Royal Holloway, University of London

The enforcement of access control policies using cryptography has received considerable attention in recent years. Such enforcement schemes vary in the amount of storage and the number of key derivation steps that are required in the worst case. These parameters correspond, respectively, to the number of edges and the diameter of some graph that is used to represent the access control policy. In this talk we will consider a particular class of access control policies and the associated graphs. We then present a number of techniques for constructing a new graph that has a smaller diameter than the original graph but enforces the same access control policy.
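The correspondence between derivation steps and path length can be made concrete with an iterated hash: deriving the key for a node reachable along a path of d edges costs d hash evaluations, so the worst-case derivation time is the graph's diameter, and adding shortcut edges (extra storage) shortens paths. This is an illustrative sketch of the general mechanism, not any particular scheme from the talk:

```python
import hashlib

def derive_key(root_key, path):
    """Iterated key derivation along a path in a policy graph: one
    SHA-256 step per edge, keyed by the edge label.  Worst-case cost
    equals the diameter of the enforcement graph."""
    k = root_key
    for label in path:
        k = hashlib.sha256(k + label.encode()).digest()
    return k
```

Under this model, the techniques in the talk amount to rewiring the graph so the same set of reachable keys is preserved while the longest derivation path shrinks.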

**Dr Zakria Hussain**, Department of Computer Science, University College London

We use the ideas from the Inductive Conformal Predictor (ICP) framework to construct generalisation error bounds for classification. We show how we can fuse together d>1 different nonconformity measures to construct "p-values", and use sample compression bounds in order to bound the loss of falling into a specific region constructed by these d-dimensional p-values. Hence, after fixing the calibration set, we can prove upper bounds on the loss of test points once they fall into a predefined region. Time permitting, we will also show an approach to using nonconformity measures for model selection using, for instance, the SVM.
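The building block here is the standard inductive conformal p-value computed against a fixed calibration set; the talk's construction combines d of these per test point. A minimal sketch of the basic (unsmoothed) form:

```python
def icp_pvalue(cal_scores, test_score):
    """Inductive conformal p-value: the fraction of calibration
    nonconformity scores at least as large as the test score,
    counting the test point itself.  Smoothed variants also exist."""
    n = len(cal_scores)
    ge = sum(1 for s in cal_scores if s >= test_score)
    return (ge + 1) / (n + 1)
```

A high p-value means the test point conforms well to the calibration set; the d-dimensional vectors in the talk are obtained by computing one such p-value per nonconformity measure.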

**Dr Shawn Douglas**, Wyss Institute, Harvard University

The programmability of DNA makes it an attractive material for constructing intricate nanoscale shapes. The DNA origami method, in which a multiple-kilobase single-stranded "scaffold" is folded into a custom shape by interaction with hundreds of short oligonucleotide "staple" strands has proved to be extremely versatile for creating custom two- and three-dimensional shapes. The next challenge we aim to address is to apply our nanoscale building techniques to create tools and devices with demand-meeting applications. This talk will describe preliminary steps we've taken to develop a therapeutic nanostructure with selective targeting capability, as well as new CAD software tools to aid in future design efforts.

**Dr Colin Walter**, Information Security Group, Department of Mathematics, Royal Holloway, University of London

Location-aware addition chains are used to determine schemes for exponentiation in resource-constrained devices. A dual will be constructed for such addition chains. This reverses the order of an addition chain, and thereby provides a duality between left-to-right and right-to-left exponentiation schemes which preserves space requirements. The duality extends a previously known transformation between such schemes that preserves the time cost. All the usual algorithms for exponentiation have duals in the opposite direction. In particular, Yao's right-to-left scheme is derived as the dual of the normal table-based left-to-right m-ary scheme with an explicit pairing of registers between the two algorithms, and a new multi-base algorithm is obtained by taking the dual of the right-to-left division chain method.
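The simplest pair of schemes related by this left-to-right/right-to-left duality is binary square-and-multiply; the m-ary and division-chain methods in the talk generalise them. A sketch of both directions (the duality between their register usage, not shown here, is what the talk makes precise):

```python
def ltr_pow(x, e, mod):
    """Left-to-right binary exponentiation: scan the exponent bits
    from most significant, squaring at each step."""
    r = 1
    for bit in bin(e)[2:]:
        r = r * r % mod
        if bit == '1':
            r = r * x % mod
    return r

def rtl_pow(x, e, mod):
    """Right-to-left binary exponentiation: scan from the least
    significant bit, maintaining a running square of the base."""
    r, s = 1, x
    while e:
        if e & 1:
            r = r * s % mod
        s = s * s % mod
        e >>= 1
    return r
```

Both use one accumulator plus the (running) base, which is the kind of space accounting the duality preserves.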

**Dr Alex Clark**, Department of Computer Science, Royal Holloway, University of London

In many situations, we are interested in modeling sequence data: typically by constructing a representation of a formal language from a sequence of examples. For example, in understanding how children might acquire their first language, we need to understand how richly structured representations can be constructed for languages that exhibit the sorts of structural dependencies typical of natural languages. Until quite recently, there were almost no algorithms for doing this except in the case where the languages are regular. In this talk, we will discuss recent algorithms which can model non-regular and indeed non-context-free languages. These approaches are based on distributional learning, which models the relationship between contexts and substrings in a language. We will then present a simple algorithm for learning a class of context-free grammars using Angluin's minimally adequate teacher model of exact query learning.

**Dr Igor Razgon**, University of Leicester

Fixed-parameter computation is a methodology for coping with NP-hardness that, under certain conditions, allows hard optimization problems to be solved both precisely and efficiently. According to this methodology, a problem is associated with a parameter $k$ and solved by an exponential-time algorithm whose exponential part depends solely on the parameter, while the dependency on the input size $n$ is uniformly polynomial, i.e. the degree of the polynomial is a constant, usually not greater than 3. When the parameter is small, we get a low-degree polynomial algorithm solving an NP-hard problem to optimality. Problems that can be solved in this way are called fixed-parameter tractable (FPT). The parameter choice depends on the particular application and can be, for example, the maximum or minimum allowed size of the output, or some structural measure of the input, e.g. treewidth.

This talk concerns the parameterization of graph separation problems, where the task is to find the smallest set of vertices separating some predefined pairs of terminals, possibly under additional constraints. In the first part of the talk, I will informally define a fixed-parameter algorithm and describe two fixed-parameter algorithms for the Vertex Cover problem. In the second part, I will describe an algorithm for solving the Multiway Cut problem and show how the main idea of this algorithm has helped to establish the fixed-parameter tractability of such challenging graph separation problems as Directed Feedback Vertex Set, Minimum 2CNF Deletion, and Multicut. In the final part, I will outline the current challenges related to the parameterization of graph separation problems. The main challenge is kernelization of these problems, i.e. designing a polynomial-time procedure that shrinks the initial instance so that its size depends polynomially on the parameter. Such a procedure, applied at the preprocessing stage, can significantly enhance any algorithm solving the given problem, and hence understanding the kernelizability of parameterized problems is a research direction of great practical importance.
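The classic warm-up fixed-parameter algorithm for Vertex Cover is the bounded search tree: any cover must contain one of the two endpoints of each edge, so branching on an uncovered edge gives at most 2^k leaves. A minimal sketch (one of the two standard algorithms; which two the talk presents is not specified here):

```python
def vertex_cover(edges, k):
    """Decide whether the graph given by `edges` has a vertex cover
    of size <= k.  Bounded search tree: O(2^k * m) time."""
    if not edges:
        return True           # every edge is covered
    if k == 0:
        return False          # edges remain but no budget left
    u, v = edges[0]
    # any cover must contain u or v; try both branches
    for w in (u, v):
        rest = [(a, b) for (a, b) in edges if a != w and b != w]
        if vertex_cover(rest, k - 1):
            return True
    return False
```

The exponential cost depends only on k, not on the number of vertices, which is exactly the FPT behaviour described above.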

The talk is self-contained and assumes no prior familiarity with the area.

**Richard McEvoy**, Information Security Group, Department of Mathematics, Royal Holloway, University of London

Stuxnet and similar threats are a visibly new trend in Information Security threats, in particular, threats to the critical infrastructure of nation-states. But the approach and techniques utilised -- including probable attack motivations -- have been predicted by researchers working in intrusion detection over the past decade. This seminar presents a predicted (and predictive) model for advanced persistent threats in SCADA and DCS (aka process control) networks and shows how closely Stuxnet fits the predicted pattern. Further aspects of the model are explored. In addition, the seminar also touches on research efforts to detect, prevent and intervene in such attacks.

**Professor Simon Blackburn**, Department of Mathematics, Royal Holloway, University of London

This talk is based on the following papers:

S.R. Blackburn, 'The existence of k-radius sequences'. See http://arxiv.org/abs/1101.1172

S.R. Blackburn and J.F. McKee, 'Constructing k-radius sequences', Mathematics of Computation, to appear. See http://arxiv.org/abs/1006.5812

Joint work with James McKee.
