[MUSIC] Dear learners, welcome to the Computational Geometry course. Let me introduce myself. My name is Kira Vyatkina. I am a leading researcher at the Center for Algorithmic Biotechnology of the Institute of Translational Biomedicine at St. Petersburg State University. Prior to switching to algorithmic biology, I worked in the area of computational geometry, a beautiful field I will be happy to introduce you to. So let us start our journey. The first week of our course will be devoted to a classical problem of computational geometry called point inclusion in a polygon.

Let us look around ourselves. We live in a three-dimensional world, and the objects that surround us are essentially geometric. Whatever we are doing, we need to manipulate them, or inspect them, or avoid them. To proceed accordingly, we need to follow efficient strategies, or algorithms. Thus, on an everyday basis, we act as algorithm developers. No wonder the development of this kind of algorithms also often appears in competitive programming tasks.

A scientific area that seeks to develop efficient algorithms for solving problems whose input and output are geometric objects is called computational geometry. Basic geometric objects comprise points, segments, circles, polygons, and so on. While real-life problems are typically three-dimensional, it is often beneficial to first solve an analogous problem in the two-dimensional case, and then, using the observations made, to address the three-dimensional task. As for competitions, the tasks there are usually stated for the two-dimensional case. Therefore, in the frame of our course, we will restrict our attention to two-dimensional problems. Thus our working space will be the Euclidean plane.

When proposing an algorithm, we need to estimate its computational complexity. To this end, we should fix a computational model. Here we will assume that each memory cell of our computer can store a real value. In reality, this is not the case.
A computer memory cell can store only a floating-point value. However, this assumption will allow us to come up with adequate estimates of the time and space complexity of our methods. Furthermore, we need to indicate which operations will be considered elementary and can be accomplished in unit time. Those operations will comprise: arithmetic operations (addition, subtraction, multiplication, division, and unary minus); comparison of numbers; retrieval of an element of an array by its index; and, whenever necessary, trigonometric functions, including sine, cosine, tangent, and cotangent, as well as other analytic functions, such as the exponential function, logarithms, and so on.

An example of an operation not considered elementary in the frame of our model is rounding to the nearest smaller or larger integer. The respective functions in programming languages are typically called floor and ceil; the latter is an abbreviation of ceiling. Let us briefly recall how those operations act. For the positive value 2.39, rounding it to the nearest smaller integer gives us 2, and rounding it to the nearest larger integer gives us 3. For the negative value -2.39, rounding it to the nearest smaller integer gives us -3, and rounding it to the nearest larger integer gives us -2.

Having developed an algorithm, we need to analyze its efficiency. This amounts to estimating its time and space complexity. Time complexity refers to the number of elementary operations performed throughout the execution of the algorithm, and space complexity refers to the amount of storage used thereby. The time and space complexity of an algorithm are usually estimated in an asymptotic sense. This means that we provide an estimate as a function of the input size which describes the algorithm's complexity. To this end, big-O notation, big-omega notation, and big-theta notation are commonly used. Let us briefly recall the essence of those concepts.
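As an illustration, the behavior of the floor and ceil functions on the values mentioned above can be checked with Python's standard math module (a minimal sketch):

```python
import math

# floor rounds to the nearest smaller integer,
# ceil rounds to the nearest larger integer
print(math.floor(2.39))   # 2
print(math.ceil(2.39))    # 3

# for a negative value, the nearest smaller integer
# lies farther from zero
print(math.floor(-2.39))  # -3
print(math.ceil(-2.39))   # -2
```

Note in particular the asymmetry with truncation toward zero: floor(-2.39) is -3, not -2.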
Let f and g be two positive-valued functions of a natural number n. We say that f(n) is O(g(n)) if there exist a positive real constant C and an integer n0 such that for any n greater than or equal to n0, the value of f(n) is at most that of g(n) times C. In other words, for large enough n, f(n) grows no faster than g(n). For example, 2n + 5 is O(n to the 4th).

We say that f(n) is big omega of g(n) if there exist a positive real constant C and an integer n0 such that for any n greater than or equal to n0, the value of f(n) is at least that of g(n) times C. In other words, for large enough n, f(n) grows at least as fast as g(n). For example, 2n cubed + 5 is big omega of n.

We say that f(n) is big theta of g(n) if there exist positive real constants C1 and C2 and an integer n0 such that for any n greater than or equal to n0, the value of f(n) is between that of g(n) times C1 and that of g(n) times C2. In other words, for large enough n, f(n) grows as fast as g(n). For example, 2n cubed + 5 is big theta of n cubed.

For the sake of completeness, let us also discuss small-o notation. We say that f(n) is small o of g(n) if for any positive real constant C there exists an integer n0 such that for any n greater than or equal to n0, the value of f(n) is smaller than that of g(n) times C. In other words, the ratio of f(n) to g(n) tends to 0 as n increases. This means that g(n) grows much faster than f(n). For example, 2n cubed + 5 is small o of 2 to the nth.

In the frame of our course, we will always estimate the worst-case complexity of our algorithms. Such estimates are most common in computational geometry. Alternatively, one may estimate the average-case complexity of an algorithm. This kind of estimate can also be very useful and informative; however, such estimates are often more difficult to obtain.
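These definitions can be illustrated numerically. In the Python sketch below, the witness constants C1 = 2, C2 = 3, and n0 = 2 for the big-theta claim about 2n cubed + 5 are my own choices, not taken from the lecture; any constants satisfying the definition would do:

```python
def f(n):
    return 2 * n**3 + 5

# f(n) is Theta(n^3): with C1 = 2, C2 = 3, n0 = 2,
# we have C1 * n^3 <= f(n) <= C2 * n^3 for all n >= n0
# (the upper bound holds because 5 <= n^3 once n >= 2)
for n in range(2, 200):
    assert 2 * n**3 <= f(n) <= 3 * n**3

# f(n) is o(2^n): the ratio f(n) / 2^n tends to 0,
# so it keeps shrinking as n grows
ratios = [f(n) / 2**n for n in (10, 20, 30)]
assert ratios[0] > ratios[1] > ratios[2]
print(ratios)
```

The same pattern, with the inequality reversed or one-sided, verifies the big-O and big-omega claims.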