
I was a computer science professor at Unirio, in Rio de Janeiro, Brazil, from 2009 to 2012. I was a postdoctoral researcher at COPPE/UFRJ, advised by Celina Figueiredo. I received my PhD from the University of Maryland, College Park, in 2007, advised by David Mount. Check my Lattes CV for more details.
I'm especially interested in computational geometry, but I like working on all topics related to algorithms (data structures, randomization, approximation, graphs, bioinformatics, complexity theory...). You can find my papers listed below and also at google scholar citations or dblp. My teaching experience includes analysis of algorithms, formal languages, data structures, computational geometry, probability, and programming languages. During my free time, I go rock climbing.
I hosted SoCG 2013 in Rio de Janeiro, together with Thomas Lewiner. For more information about previous SoCG events, check computationalgeometry.org.
Symposium on Computational Geometry, June 2013
Numerous approximation algorithms for unit disk graphs have been proposed in the literature, exhibiting sharp tradeoffs between running times and approximation ratios. We propose a method to obtain linear-time approximation algorithms for unit disk graph problems. Our method yields linear-time (4+ε)-approximations to the maximum-weight independent set and the minimum dominating set, as well as a linear-time approximation scheme for the minimum vertex cover, improving upon all known linear- or near-linear-time algorithms for these problems.
We introduce a method to decide whether a graph G admits a realization on the plane in which two vertices lie within unit distance from one another exactly if they are neighbors in G. Such graphs are called unit disk graphs, and their recognition is a known NP-hard problem. By iteratively discretizing the plane, we build a sequence of geometrically defined trigraphs (graphs with mandatory, forbidden, and optional adjacencies) until either we find a realization of G or the interruption of the sequence certifies that no realization exists. Additionally, we consider the proposed method in the scope of the more general Distance Geometry Problem with Ranges, where arbitrary intervals of pairwise distances are allowed.
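As a small illustration of the realization question (not the trigraph method from the paper), the following Python sketch checks whether a proposed placement of vertices in the plane realizes a given graph as a unit disk graph. All names are hypothetical.

```python
import math

def realizes(points, edges, n):
    """Check whether placing vertex i at points[i] realizes the graph
    (V = {0..n-1}, edges) as a unit disk graph: two vertices must lie
    within distance 1 if and only if they are adjacent."""
    edge_set = {frozenset(e) for e in edges}
    for i in range(n):
        for j in range(i + 1, n):
            close = math.dist(points[i], points[j]) <= 1.0
            if close != (frozenset((i, j)) in edge_set):
                return False
    return True
```

For example, three collinear points spaced 0.9 apart realize the path on three vertices, since the endpoints are 1.8 apart and hence correctly non-adjacent.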
A unit disk graph is the intersection graph of n congruent disks in the plane. Dominating sets in unit disk graphs are widely studied due to their application in wireless ad hoc networks. Because the minimum dominating set problem for unit disk graphs is NP-hard, numerous approximation algorithms have been proposed in the literature, including some PTASs. However, since the proposal of a linear-time 5-approximation algorithm in 1995, the lack of efficient algorithms attaining better approximation factors has aroused attention. We introduce a linear-time O(n+m) approximation algorithm that takes the usual adjacency representation of the graph as input and outputs a 44/9-approximation. This approximation factor is also attained by a second algorithm, which takes the geometric representation of the graph as input and runs in O(n log n) time regardless of the number of edges. Additionally, we propose a 43/9-approximation which can be obtained in O(n^{2} m) time given only the graph's adjacency representation. It is noteworthy that the dominating sets obtained by our algorithms are also independent sets.
A unit disk graph is the intersection graph of n congruent disks in the plane. Dominating sets in unit disk graphs are widely studied due to their application in wireless ad hoc networks. Since the minimum dominating set problem for unit disk graphs is NP-hard, several approximation algorithms with different merits have been proposed in the literature. On one extreme, there is a linear-time 5-approximation algorithm. On another extreme, there are two PTASs whose running times are polynomials of very high degree. We introduce a linear-time approximation algorithm that takes the usual adjacency representation of the graph as input and attains a 44/9 approximation factor. This approximation factor is also attained by a second algorithm we present, which takes the geometric representation of the graph as input and runs in O(n log n) time, regardless of the number of edges. The analysis of the approximation factor of the algorithms, both of which are based on local improvements, exploits an assortment of results from discrete geometry to prove that certain graphs cannot be unit disk graphs. It is noteworthy that the dominating sets obtained by our algorithms are also independent sets.
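The classic linear-time 5-approximation mentioned above rests on a simple fact: any maximal independent set in a unit disk graph is a dominating set, and it contains at most 5 times the minimum number of dominators. A minimal Python sketch of that baseline (hypothetical names; the 44/9 and 43/9 algorithms in the papers are considerably more involved):

```python
import math

def unit_disk_graph(points, radius=1.0):
    """Adjacency sets: two vertices are adjacent iff their centers lie
    within `radius` of each other."""
    n = len(points)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(points[i], points[j]) <= radius:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def greedy_independent_dominating_set(adj):
    """Greedily build a maximal independent set; in a unit disk graph it
    is both dominating and a 5-approximation of the minimum dominating set."""
    dominating, covered = [], set()
    for v in sorted(adj):            # any vertex order works
        if v not in covered:
            dominating.append(v)
            covered.add(v)
            covered |= adj[v]        # neighbors of v are now dominated
    return dominating
```

The 5 comes from packing: a disk of unit radius can contain at most 5 pairwise non-adjacent unit-disk centers around any optimal dominator.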
Perfect matchings and maximum weight matchings are two fundamental combinatorial structures. We consider the ratio between the maximum weight of a perfect matching and the maximum weight of a general matching. Motivated by the application in triangle meshes, where we seek to convert a triangulation into a quadrangulation by merging pairs of adjacent triangles, we focus on bridgeless cubic graphs. First, we characterize graphs that attain the extreme ratios. Second, we present a lower bound for all bridgeless cubic graphs. Finally, we present upper bounds for subclasses of bridgeless cubic graphs that are relevant to the application, such as planar, Hamiltonian, and bipartite graphs. Most of the bounds are tight, but for bipartite graphs an intriguing gap is left as an open problem.
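To make the ratio concrete, here is a brute-force Python sketch (usable only on tiny graphs, and not the proof technique of the paper) that computes the ratio between the maximum weight of a perfect matching and that of an arbitrary matching. The example in the test uses a weighted path rather than a cubic graph, purely to show the ratio can be small.

```python
from itertools import combinations

def matchings(edges):
    """Yield every matching (vertex-disjoint edge subset) of a weighted
    graph given as (u, v, weight) triples."""
    for k in range(len(edges) + 1):
        for sub in combinations(edges, k):
            ends = [x for (u, v, _) in sub for x in (u, v)]
            if len(ends) == len(set(ends)):   # no vertex repeated
                yield sub

def matching_ratio(edges, n):
    """Max weight of a perfect matching divided by max weight of any
    matching (brute force; assumes both maxima are positive)."""
    best_any = best_perfect = 0
    for m in matchings(edges):
        w = sum(wt for (_, _, wt) in m)
        best_any = max(best_any, w)
        if 2 * len(m) == n:                   # matching covers all vertices
            best_perfect = max(best_perfect, w)
    return best_perfect / best_any
```

On the path 0-1-2-3 with weights 1, 10, 1, the best matching takes the middle edge (weight 10), while the only perfect matching takes the two end edges (weight 2), giving ratio 0.2.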
@inproceedings{polytopeapx_socg, author = {Arya, Sunil and da Fonseca, Guilherme D. and Mount, David M.}, booktitle = {ACM Symposium on Computational Geometry (SoCG)}, title = {Optimal Area-Sensitive Bounds for Polytope Approximation}, pages = {363--372}, doi = {10.1145/2261250.2261305}, url = {http://www.uniriotec.br/~fonseca/polytopeapx_socg.pdf}, year = {2012}, }
Approximating convex bodies is a fundamental question in geometry and has applications to a wide variety of optimization problems. Given a convex body K in R^{d} for fixed d, the objective is to minimize the number of vertices (alternatively, the number of facets) of an approximating polytope for a given Hausdorff error ε. The best known uniform bound, due to Dudley (1974), shows that O((diam(K)/ε)^{(d-1)/2}) facets suffice. While this bound is optimal in the case of a Euclidean ball, it is far from optimal for skinny convex bodies. We show that, under the assumption that the width of the body in any direction is at least ε, it is possible to approximate a convex body using O((area(K)/ε^{d-1})^{1/2}) facets. This bound is never worse than the previous bound and may be significantly better for skinny bodies. This bound is provably optimal in the worst case and improves upon our earlier result (which appeared in SODA 2012). Our improved bound arises from a novel approach to sampling points on the boundary of a convex body in order to stab all (dual) caps of a given width. This approach involves the application of an elegant concept from the theory of convex bodies, called Macbeath regions. While Macbeath regions are defined in terms of volume considerations, we show that by applying them to both the original body and its dual, and then combining this with known bounds on the Mahler volume, it is possible to achieve the desired width-based sampling.
@inproceedings{mahler_soda, author = {Arya, Sunil and da Fonseca, Guilherme D. and Mount, David M.}, booktitle = {ACM-SIAM Symposium on Discrete Algorithms (SODA)}, title = {Polytope Approximation and the Mahler Volume}, pages = {29--42}, url = {http://www.uniriotec.br/~fonseca/mahler_soda.pdf}, year = {2012}, }
The problem of approximating convex bodies by polytopes is an important and well studied problem. Given a convex body K in R^{d}, the objective is to minimize the number of vertices (alternatively, the number of facets) of an approximating polytope for a given Hausdorff error ε. Dudley (1974) and Bronshteyn and Ivanov (1976) show that in spaces of fixed dimension, O((diam(K)/ε)^{(d-1)/2}) vertices (alt., facets) suffice. In our first result, under the assumption that the width of the body is at least ε, we strengthen the above bound to Õ(√area(K)/ε^{(d-1)/2}). This is never worse than the previous bound (by more than logarithmic factors) and may be significantly better for skinny bodies. Our analysis exploits an interesting analogy with a classical concept from the theory of convexity, called the Mahler volume. This is a dimensionless quantity that involves the product of the volumes of a convex body and its polar dual. In our second result, we apply the same machinery to improve upon the best known bounds for answering ε-approximate polytope membership queries. Given a convex polytope P defined as the intersection of halfspaces, such a query determines whether a query point q lies inside or outside P, but may return either answer if q's distance from P's boundary is at most ε. We show that, without increasing storage, it is possible to dramatically reduce the best known search times for ε-approximate polytope membership. This further implies improvements to the best known search times for approximate nearest neighbor searching in spaces of fixed dimension.
@inproceedings{polytope_conf, doi = {10.1145/1993636.1993713}, author = {Arya, Sunil and da Fonseca, Guilherme D. and Mount, David M.}, booktitle = {ACM Symposium on Theory of Computing (STOC)}, title = {Approximate Polytope Membership Queries}, pages = {579--586}, url = {http://www.uniriotec.br/~fonseca/polytope_conf.pdf}, year = {2011}, }
We consider an approximate version of a fundamental geometric search problem, polytope membership queries. Given a convex polytope P in R^{d}, presented as the intersection of halfspaces, the objective is to preprocess P so that, given a query point q, it is possible to determine efficiently whether q lies inside P subject to an allowed error ε. Previous solutions to this problem were based on straightforward applications of classic polytope approximation techniques by Dudley (1974) and Bentley et al. (1982). The former yields minimum storage, the latter yields constant query time, and a space-time tradeoff can be obtained by interpolating between the two. We present the first significant improvements to this tradeoff. For example, using the same storage as Dudley, we reduce the query time from O(1/ε^{(d-1)/2}) to O(1/ε^{(d-1)/4}). Our approach is based on a very simple construction algorithm, whose analysis is surprisingly nontrivial. Both lower bounds and upper bounds on the performance of the algorithm are presented. To establish the relevance of our results, we introduce a reduction from approximate nearest neighbor searching to approximate polytope membership queries. Remarkably, we show that our tradeoff provides significant improvements to the best known space-time tradeoffs for approximate nearest neighbor searching. Furthermore, this is achieved with constructions that are much simpler than existing methods.
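The baseline that these data structures improve upon is the trivial query with no preprocessing: test q against every halfspace, allowing an ε margin. A Python sketch (assuming unit-norm outward normals; all names are illustrative):

```python
def approx_membership(halfspaces, q, eps):
    """Halfspaces are pairs (a, b) meaning a.x <= b with |a| = 1.
    Returns True if q satisfies every constraint up to slack eps, and
    False if some constraint is violated by more than eps; within the
    eps-fringe of the boundary either answer is acceptable."""
    for a, b in halfspaces:
        slack = sum(ai * qi for ai, qi in zip(a, q)) - b
        if slack > eps:
            return False
    return True
```

This costs O(number of halfspaces) per query; the point of the paper is to trade storage for query times far below that.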
@article{flats, author = {da Fonseca, Guilherme D.}, title = {Fitting Flats to Points with Outliers}, doi = {10.1142/S0218195911003809}, journal = {International Journal of Computational Geometry and Applications}, number = {5}, volume = {21}, pages = {559--569}, url = {http://www.uniriotec.br/~fonseca/flats_journal.pdf}, year = {2011}, }
Determining the best shape to fit a set of points is a fundamental problem in many areas of computer science. We present an algorithm to approximate the k-flat that best fits a set of n points with n - m outliers. This problem generalizes the smallest m-enclosing ball, infinite cylinder, and slab. Our algorithm gives an arbitrary constant factor approximation in O(n^{k+2}/m) time, regardless of the dimension of the point set. While our upper bound nearly matches the lower bound, the algorithm may not be feasible for large values of k. Fortunately, for some practical sets of inliers, we reduce the running time to O(n^{k+2}/m^{k+1}), which is linear when m = Ω(n).
@article{pgrid, author = {de S\'{a}, Vin\'{i}cius G. P. and de Figueiredo, Celina M. H. and da Fonseca, Guilherme D. and Machado, Raphael}, doi = {10.1016/j.tcs.2011.01.018}, journal = {Theoretical Computer Science}, title = {Complexity Dichotomy on Partial Grid Recognition}, number = {22}, volume = {412}, pages = {2370--2379}, url = {http://www.uniriotec.br/~fonseca/pgrid.pdf}, year = {2011}, }
Deciding whether a graph can be embedded in a grid using only unit-length edges is NP-complete, even when restricted to binary trees. However, it is not difficult to devise a number of graph classes for which the problem is polynomial, even trivial. A natural step, outstanding thus far, was to provide a broad classification of graphs that make for polynomial or NP-complete instances. We provide such a classification based on the set of allowed vertex degrees in the input graphs, yielding a full dichotomy on the complexity of the problem. As byproducts, the previous NP-completeness result for binary trees was strengthened to strictly binary trees, and the three-dimensional version of the problem was for the first time proven to be NP-complete. Our results were made possible by introducing the concepts of consistent orientations and robust gadgets, and by showing how the former allows NP-completeness proofs by local replacement even in the absence of the latter.
@inproceedings{proximity, author = {Arya, Sunil and da Fonseca, Guilherme D. and Mount, David M.}, booktitle = {European Symposium on Algorithms (ESA)}, series = {Lecture Notes in Computer Science}, volume = {6346}, pages = {374--385}, title = {A Unified Approach to Approximate Proximity Searching}, url = {http://www.uniriotec.br/~fonseca/proximity.pdf}, doi = {10.1007/978-3-642-15775-2_32}, year = {2010}, }
The inability to answer proximity queries efficiently for spaces of dimension d > 2 has led to the study of approximate versions of proximity problems. Several techniques have been proposed to address different approximate proximity problems. In this paper, we present a new and unified approach to proximity searching, which provides efficient solutions for several problems: spherical range queries, idempotent spherical range queries, spherical emptiness queries, and nearest neighbor queries. In contrast to previous data structures, our approach is simple and easy to analyze, providing a clear picture of how to exploit the particular characteristics of each of these problems. As applications of our approach, we provide simple and practical data structures that match the best previous results up to logarithmic factors, as well as advanced data structures that improve over the best previous results for all aforementioned proximity problems.
@inproceedings{vlsi_isco, author = {de S\'{a}, Vin\'{i}cius G. P. and de Figueiredo, Celina M. H. and da Fonseca, Guilherme D. and Machado, Raphael}, booktitle = {International Symposium on Combinatorial Optimization}, series = {Electronic Notes in Discrete Mathematics}, title = {Complexity Dichotomy on Degree-Constrained VLSI Layouts with Unit-Length Edges}, volume = {36}, pages = {391--398}, url = {http://www.uniriotec.br/~fonseca/vlsi_isco.pdf}, doi = {10.1016/j.endm.2010.05.050}, year = {2010}, }
Deciding whether an arbitrary graph admits a VLSI layout with unit-length edges is NP-complete, even when restricted to binary trees. However, for certain graphs, the problem is polynomial or even trivial. A natural step, outstanding thus far, was to provide a broader classification of graphs that make for polynomial or NP-complete instances. We provide such a classification based on the set of vertex degrees in the input graphs, yielding a comprehensive dichotomy on the complexity of the problem, with and without the restriction to trees.
@article{absolute, author = {da Fonseca, Guilherme D. and Mount, David M.}, doi = {10.1016/j.comgeo.2008.09.009}, issn = {0925-7721}, journal = {Computational Geometry}, number = {4}, volume = {43}, pages = {434--444}, title = {Approximate range searching: The absolute model}, url = {http://www.uniriotec.br/~fonseca/ARSCGTA.pdf}, year = {2010}, }
Range searching is a well known problem in the area of geometric data structures. We consider this problem in the context of approximation, where an approximation parameter ε > 0 is provided. Most prior work on this problem has focused on the case of relative errors, where each range shape R is bounded, and points within distance ε diam(R) of the range's boundary may or may not be included. We consider a different approximation model, called the absolute model, in which points within distance ε of the range's boundary may or may not be included, regardless of the diameter of the range. We consider range spaces consisting of halfspaces, Euclidean balls, simplices, axis-aligned rectangles, and general convex bodies. We consider a variety of problem formulations, including range searching under general commutative semigroups, idempotent semigroups, groups, and range emptiness. We show how idempotence can be used to improve not only approximate, but also exact halfspace range searching. Our data structures are much simpler than both their exact and relative model counterparts, and so are amenable to efficient implementation.
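One way to see why the absolute model admits simple data structures: snapping points to a grid of side ε only misplaces points that lie within O(ε) of a range's boundary, which is exactly the slack the model allows. An illustrative Python sketch (not the data structures of the paper; names are hypothetical):

```python
import math

def build_grid(points, eps):
    """Snap each 2D point to a grid cell of side eps, storing a count
    per nonempty cell."""
    grid = {}
    for (x, y) in points:
        cell = (math.floor(x / eps), math.floor(y / eps))
        grid[cell] = grid.get(cell, 0) + 1
    return grid

def approx_halfplane_count(grid, eps, a, b, c):
    """Count points with a*x + b*y <= c, where each cell votes by its
    center; only points within O(eps) of the boundary can be
    misclassified, as the absolute model permits."""
    total = 0
    for (i, j), cnt in grid.items():
        cx, cy = (i + 0.5) * eps, (j + 0.5) * eps
        if a * cx + b * cy <= c:
            total += cnt
    return total
```

Query time depends on the number of occupied cells, not on n, which is the flavor of simplification the absolute model buys.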
@article{enclosing, author = {de Figueiredo, Celina M. H. and da Fonseca, Guilherme D.}, doi = {10.1016/j.ipl.2009.09.001}, issn = {0020-0190}, journal = {Information Processing Letters}, number = {21--22}, pages = {1216--1221}, title = {Enclosing weighted points with an almost-unit ball}, url = {http://www.uniriotec.br/~fonseca/enclosingIPL.pdf}, volume = {109}, year = {2009}, }
Given a set of n points with positive real weights in d-dimensional space, we consider an approximation to the problem of placing a unit ball, in order to maximize the sum of the weights of the points enclosed by the ball. Given an approximation parameter ε < 1, we present an O(n/ε^{d-1}) expected time algorithm that determines a ball of radius 1 + ε enclosing a weight at least as large as the weight of the optimal unit ball. This is the first approximation algorithm for the weighted version of the problem in d-dimensional space. We also present a matching lower bound for a certain class of algorithms for the problem.
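Not the (1+ε) algorithm of the paper, but a crude baseline that illustrates the problem: since some input point lies inside the optimal unit ball, a ball of radius 2 centered at that point covers at least the optimal weight. A Python sketch with hypothetical names:

```python
import math

def enclosed_weight(center, radius, weighted_points):
    """Total weight of the points within `radius` of `center`;
    weighted_points is a list of (point, weight) pairs."""
    return sum(w for (p, w) in weighted_points
               if math.dist(center, p) <= radius)

def radius2_lower_bound(weighted_points):
    """Crude relaxation: some input point p lies in the optimal unit
    ball B, and every point of B is within distance 2 of p, so the best
    radius-2 ball centered at an input point weighs at least OPT."""
    return max(enclosed_weight(p, 2.0, weighted_points)
               for (p, _) in weighted_points)
```

The paper's contribution is shrinking that radius from 2 down to 1 + ε while keeping near-linear expected time.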
@article{odd, author = {Bueno, Let\'{\i}cia R. and Faria, Luerbio and de Figueiredo, Celina M. H. and da Fonseca, Guilherme D.}, journal = {Applicable Analysis and Discrete Mathematics}, number = {2}, pages = {386--394}, title = {Hamiltonian Paths in Odd Graphs}, url = {http://pefmath.etf.rs/accepted/819Figueiredonv.pdf}, doi = {10.2298/AADM0902386B}, volume = {3}, year = {2009}, }
Lovász conjectured that every connected vertex-transitive graph has a Hamiltonian path. The odd graphs O_{k} form a well-studied family of connected, k-regular, vertex-transitive graphs. It was previously known that O_{k} has Hamiltonian paths for k ≤ 14. A direct computation of Hamiltonian paths in O_{k} is not feasible for large values of k, because O_{k} has C(2k-1, k-1) vertices and k C(2k-1, k-1)/2 edges. We show that O_{k} has Hamiltonian paths for 15 ≤ k ≤ 18. We do so without running any heuristics. Instead, we use existing results on the middle levels problem, therefore relating these two fundamental problems. We show that further improved results for the middle levels problem can be used to find Hamiltonian paths in O_{k} for larger values of k.
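The odd graphs themselves are easy to generate for small k; for instance, O_3 is the Petersen graph. A small Python sketch (illustrative only; the paper's results do not rely on such enumeration):

```python
from itertools import combinations

def odd_graph(k):
    """Odd graph O_k: vertices are the (k-1)-subsets of a (2k-1)-set,
    with two vertices adjacent iff the subsets are disjoint.  O_k is
    k-regular and vertex-transitive."""
    vertices = [frozenset(s) for s in combinations(range(2 * k - 1), k - 1)]
    edges = [(u, v) for u, v in combinations(vertices, 2) if not (u & v)]
    return vertices, edges
```

For k = 3 this yields 10 vertices and 15 edges, the Petersen graph; the vertex count C(2k-1, k-1) explains why direct search becomes infeasible quickly.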
@inproceedings{tradeoffssibgrapi, author = {Arya, Sunil and da Fonseca, Guilherme D. and Mount, David M.}, booktitle = {21st Brazilian Symposium on Computer Graphics and Image Processing (SIBGRAPI '08)}, doi = {10.1109/SIBGRAPI.2008.24}, pages = {237--244}, title = {Tradeoffs in Approximate Range Searching Made Simpler}, url = {http://www.uniriotec.br/~fonseca/tradeoffssibgrapi.pdf}, year = {2008}, }
Range searching is a fundamental problem in computational geometry. The problem involves preprocessing a set of n points in d-dimensional space into a data structure, so that it is possible to determine the subset of points lying within a given query range. In approximate range searching, a parameter ε > 0 is given, and for a given query range R the points lying within distance ε diam(R) of the range's boundary may be counted or not. In this paper we present three results related to the issue of tradeoffs in approximate range searching. First, we introduce the range sketching problem. Next, we present a space-time tradeoff for smooth convex ranges, which generalize spherical ranges. Finally, we show how to modify the previous data structure to obtain a space-time tradeoff for simplex ranges. In contrast to existing results, which are based on relatively complex data structures, all three of our results are based on simple, practical data structures.
@inproceedings{absolutewads, author = {da Fonseca, Guilherme D.}, booktitle = {Algorithms and Data Structures (WADS)}, doi = {10.1007/978-3-540-73951-7\_2}, issn = {0302-9743}, pages = {2--14}, series = {Lecture Notes in Computer Science}, title = {Approximate Range Searching: The Absolute Model}, url = {http://www.uniriotec.br/~fonseca/ARSWADS.pdf}, volume = {4619}, year = {2007}, }
Range searching is a well known problem in the area of geometric data structures. We consider this problem in the context of approximation, where an approximation parameter ε > 0 is provided. Most prior work on this problem has focused on the case of relative errors, where each range shape R is bounded, and points within distance ε diam(R) of the range's boundary may or may not be included. We consider a different approximation model, called the absolute model, in which points within distance ε of the range's boundary may or may not be included, regardless of the diameter of the range. We consider range spaces consisting of halfspaces, Euclidean balls, simplices, axis-aligned rectangles, and general convex bodies. We consider a variety of problem formulations, including range searching under general commutative semigroups, idempotent semigroups, groups, and range emptiness. We show how idempotence can be used to improve not only approximate, but also exact halfspace range searching. Our data structures are much simpler than both their exact and relative model counterparts, and so are amenable to efficient implementation.
@article{hssp, author = {de Figueiredo, Celina M. H. and da Fonseca, Guilherme D. and de S\'{a}, Vinicius G. P. and Spinrad, Jeremy}, doi = {10.1007/s00453-005-1198-2}, issn = {0178-4617}, journal = {Algorithmica}, number = {2}, pages = {149--180}, title = {Algorithms for the Homogeneous Set Sandwich Problem}, url = {http://www.uniriotec.br/~fonseca/hsspalgorithmica.pdf}, volume = {46}, year = {2006}, }
A homogeneous set is a nontrivial module of a graph, i.e., a nonempty, nonunitary, proper subset of a graph's vertices such that all its elements have exactly the same outer neighborhood. Given two graphs G_{1}(V,E_{1}), G_{2}(V,E_{2}), the Homogeneous Set Sandwich Problem (HSSP) asks whether there exists a sandwich graph G_{S}(V,E_{S}), E_{1}⊆E_{S}⊆E_{2}, which has a homogeneous set. In 2001, Tang et al. published an O(n^{2}Δ_{2}) algorithm which was recently proven wrong, which would have reset the HSSP's known upper bound at the former O(n^{4}) bound determined by Cerioli et al. in 1998. We present, notwithstanding, new deterministic algorithms which establish it at O(n^{3} log m/n). We also give two even faster O(n^{3}) randomized algorithms, whose simplicity might lend them didactic usefulness. We believe that, besides providing efficient, easy-to-implement procedures to solve the problem, the study of these new approaches allows a fairly thorough understanding of it.
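Verifying that a candidate set is homogeneous is straightforward; the difficulty of the HSSP lies in searching over sandwich graphs. A Python sketch of the verification step for a single graph (hypothetical names):

```python
def is_homogeneous(H, adj, vertices):
    """H is a homogeneous set if 1 < |H| < |V| and every vertex outside
    H is adjacent either to all of H or to none of H."""
    H = set(H)
    if not (1 < len(H) < len(vertices)):
        return False
    for v in vertices:
        if v in H:
            continue
        hits = len(adj[v] & H)
        if hits not in (0, len(H)):   # v "splits" H: not homogeneous
            return False
    return True
```

For instance, the two leaves of a star form a homogeneous set, while no pair of vertices of the path on four vertices does (the path is prime).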
@inproceedings{hsspwea, author = {de Figueiredo, Celina M. and da Fonseca, Guilherme D. and de S\'{a}, Vin\'{\i}cius G. and Spinrad, Jeremy}, booktitle = {Experimental and Efficient Algorithms}, pages = {243--252}, title = {Faster Deterministic and Randomized Algorithms on the Homogeneous Set Sandwich Problem}, url = {http://www.uniriotec.br/~fonseca/hsspmcd.pdf}, year = {2004} }
A homogeneous set is a nontrivial, proper subset of a graph's vertices such that all its elements have exactly the same outer neighborhood. Given two graphs, G_{1}(V,E_{1}), G_{2}(V,E_{2}), we consider the problem of finding a sandwich graph G_{S}(V,E_{S}), with E_{1}⊆E_{S}⊆E_{2}, which contains a homogeneous set, in case such a graph exists. This is called the Homogeneous Set Sandwich Problem (HSSP). We give an O(n^{3.5}) deterministic algorithm, which improves the known upper bounds for this problem, and an O(n^{3}) Monte Carlo algorithm as well. Both algorithms, which share the same underlying idea, are quite easy to implement.
@article{hanger, author = {da Fonseca, Guilherme D. and de Figueiredo, Celina M. H. and Carvalho, Paulo C. P.}, doi = {10.1016/j.ipl.2003.10.010}, issn = {0020-0190}, journal = {Information Processing Letters}, number = {3}, pages = {151--157}, title = {Kinetic hanger}, url = {http://www.uniriotec.br/~fonseca/hanger.pdf}, volume = {89}, year = {2004}, }
A kinetic priority queue is a kinetic data structure which determines the largest element in a collection of continuously changing numbers subject to insertions and deletions. Due to its importance, many different constructions have been suggested in the literature, each with its pros and cons. We propose a simple construction that takes advantage of randomization to achieve optimal locality and the same time complexity as most other efficient structures.
@article{sm_restricted, author = {Dias, V\^{a}nia M. F. and da Fonseca, Guilherme D. and de Figueiredo, Celina M. H. and Szwarcfiter, Jayme L.}, doi = {10.1016/S0304-3975(03)00319-0}, issn = {0304-3975}, journal = {Theoretical Computer Science}, number = {1--3}, pages = {391--405}, title = {The stable marriage problem with restricted pairs}, url = {http://www.uniriotec.br/~fonseca/smtcs.ps}, volume = {306}, year = {2003}, }
A stable matching is a complete matching of men and women such that no man and woman who are not partners both prefer each other to their actual partners under the matching. In an instance of the stable marriage problem, each of the n men and n women ranks the members of the opposite sex in order of preference. It is well known that at least one stable matching exists for every stable marriage problem instance. We consider extensions of the stable marriage problem obtained by forcing and by forbidding sets of pairs. We present a characterization for the existence of a solution for the stable marriage with forced and forbidden pairs problem. In addition, we describe a reduction of the stable marriage with forced and forbidden pairs problem to the stable marriage with forbidden pairs problem. Finally, we also present algorithms for finding a stable matching, all stable pairs and all stable matchings for this extension. The complexities of the proposed algorithms are the same as the best known algorithms for the unrestricted version of the problem.
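For the unrestricted problem, a stable matching can always be found by the classic Gale-Shapley proposal algorithm, which the restricted variants build upon. A compact Python sketch (illustrative; not the paper's algorithms for forced and forbidden pairs):

```python
def gale_shapley(men_prefs, women_prefs):
    """Men-proposing Gale-Shapley.  Preferences are lists of indices of
    the opposite sex, best first.  Returns a dict woman -> man giving a
    stable matching."""
    n = len(men_prefs)
    next_choice = [0] * n             # next woman each man will propose to
    wife_of = {}                      # woman -> currently engaged man
    # rank[w][m] = position of man m in woman w's preference list
    rank = [{m: r for r, m in enumerate(prefs)} for prefs in women_prefs]
    free = list(range(n))
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in wife_of:
            wife_of[w] = m            # w accepts her first proposal
        elif rank[w][m] < rank[w][wife_of[w]]:
            free.append(wife_of[w])   # w trades up; her ex becomes free
            wife_of[w] = m
        else:
            free.append(m)            # w rejects m
    return wife_of
```

This runs in O(n^{2}) time and yields the man-optimal stable matching; forced and forbidden pairs require the further machinery described in the paper.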
@article{kinetic_heap, author = {da Fonseca, Guilherme D. and de Figueiredo, Celina M. H.}, doi = {10.1016/S0020-0190(02)00366-6}, issn = {0020-0190}, journal = {Information Processing Letters}, number = {3}, pages = {165--169}, title = {Kinetic heap-ordered trees: Tight analysis and improved algorithms}, url = {http://www.uniriotec.br/~fonseca/kh.pdf}, volume = {85}, year = {2003}, }
The most natural kinetic data structure for maintaining the maximum of a collection of continuously changing numbers is the kinetic heap. Basch, Guibas, and Ramkumar proved that the maximum number of events processed by a kinetic heap with n numbers changing as linear functions of time is O(n log^{2} n) and Ω(n log n). We prove that this number is actually Θ(n log n). In the kinetic heap, a linear number of events is stored in a priority queue; consequently, it takes O(log n) time to determine the next event at each iteration. We also present a modified version of the kinetic heap that processes O(n log n / log log n) events, with the same O(log n) time complexity to determine the next event.
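The external events in question are the moments when the maximum changes hands, i.e., breakpoints of the upper envelope of the n lines. A toy Python sketch that counts such events by direct simulation (an O(n) scan per event, unlike the kinetic heap's O(log n) certificate updates; names are hypothetical):

```python
def certificate_events(funcs, t_end):
    """Count how often the maximum of the lines value(t) = a*t + b
    changes hands during [0, t_end].  funcs is a list of (a, b) pairs."""
    t, events = 0.0, 0
    # current leader: highest value at t = 0, ties broken by slope
    leader = max(range(len(funcs)),
                 key=lambda i: (funcs[i][1], funcs[i][0]))
    while True:
        a0, b0 = funcs[leader]
        best_t, best_i = None, None
        for i, (a, b) in enumerate(funcs):
            if a > a0:                       # only steeper lines can overtake
                cross = (b0 - b) / (a - a0)
                if cross > t and (best_t is None or cross < best_t):
                    best_t, best_i = cross, i
        if best_t is None or best_t > t_end:
            return events
        t, leader, events = best_t, best_i, events + 1
```

The kinetic heap maintains one certificate per tree edge instead, so it also processes internal events; the Θ(n log n) bound of the paper counts those.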