Theory
https://csd.cmu.edu/
Computer Science Speaking Skills Talk
https://csd.cmu.edu/calendar/speaking-skills-WILLIAMS-2023-12-12
<span>Computer Science Speaking Skills Talk</span>
Reddy Conference Room, Gates Hillman 4405
<span><span lang="" typeof="schema:Person" property="schema:name" datatype="" xml:lang="">Anonymous (not verified)</span></span>
<span>Tue, 12/12/2023 - 14:30</span>
In Person
The M/M/k with Deterministic Setup Times
JALANI WILLIAMS
<p>Capacity management, whether it involves servers in a data center, human staff in a call center, or doctors in a hospital, is largely about balancing a resource-delay tradeoff. On the one hand, one would like to turn off servers when not in use (or send home staff who are idle) to save on resources. On the other hand, one wants to avoid the considerable setup time required to turn a powered-off server back on.</p>
<p>In this talk, we describe recent work focused on understanding the delay component of this tradeoff. In particular, we discuss new, tight bounds on the average delay in the M/M/k with Deterministic setup times.</p>
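<p><em>An illustrative aside (ours, not from the talk):</em> the zero-setup baseline for this model is the classical M/M/k queue, whose mean queueing delay is given by the Erlang-C formula. A minimal Python sketch, with function and parameter names of our own choosing:</p>

```python
from math import factorial

def erlang_c_wait(lam, mu, k):
    """Mean queueing delay in a plain M/M/k with no setup times.

    lam: arrival rate, mu: per-server service rate, k: number of servers.
    Requires lam < k * mu for stability.
    """
    a = lam / mu                      # offered load in Erlangs
    rho = a / k                       # per-server utilization
    assert rho < 1, "system must be stable"
    # Erlang-C probability that an arriving job must wait
    top = a**k / factorial(k) / (1 - rho)
    bottom = sum(a**i / factorial(i) for i in range(k)) + top
    p_wait = top / bottom
    return p_wait / (k * mu - lam)    # mean wait = P(wait) / (k*mu - lam)
```

<p><em>Deterministic setup times add delay on top of this baseline; the bounds discussed in the talk quantify that gap.</em></p>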
<p><em>Presented in Partial Fulfillment of the CSD Speaking Skills Requirement</em></p>
<time datetime="2023-12-12T19:30:00Z">December 12, 2023 2:30pm</time><time datetime="2023-12-12T20:30:00Z">December 12, 2023 3:30pm</time>
https://jalaniw.github.io/
Ph.D. Student, Computer Science Department, Carnegie Mellon University
Speaking Skills
<a href="https://csd.cmu.edu/people/doctoral-student/jalani-williams" hreflang="en">Jalani Williams</a>
<a href="https://csd.cmu.edu/research/research-areas/theory" hreflang="en">Theory</a>
Reddy Conference Room, Gates Hillman 4405
Tue, 12 Dec 2023 19:30:00 +0000
Theory Seminar
https://csd.cmu.edu/calendar/seminar-series-Theory-2023-12-08
<span>Theory Seminar</span>
Blelloch-Skees Conference Room, Gates Hillman 8115
<span>Fri, 12/08/2023 - 15:00</span>
In Person
Learning quantum Hamiltonians at any temperature in polynomial time
AINESH BAKSHI
<p>A quantum system of n interacting particles at thermal equilibrium can be described using a polynomial number of parameters, which correspond to the strengths of the forces between particles. A natural open question has been whether these parameters can be estimated efficiently by running experiments on the system.</p>
<p>We resolve this question by giving the first polynomial-time algorithm for this task. This improves over prior work, which uses polynomially many samples from the system, but requires exponential time. In this talk, I will introduce the problem and describe some of the key ideas behind our algorithm. No prior knowledge of quantum information required.</p>
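<p><em>A toy illustration (ours, not the talk's algorithm):</em> for a single qubit with diagonal Hamiltonian H = &theta;Z at inverse temperature &beta;, the thermal state's measurement statistics determine the parameter in closed form, so "running experiments on the system" recovers &theta; exactly; the many-body, non-commuting case is what makes the general question hard.</p>

```python
from math import exp, atanh

def thermal_expect_z(theta, beta):
    """<Z> for the Gibbs state rho = e^{-beta*H} / Z of H = theta * Z.

    Z has eigenvalues +1 (state |0>) and -1 (state |1>), so rho is
    diagonal with unnormalized weights e^{-beta*theta}, e^{+beta*theta}.
    """
    p0 = exp(-beta * theta)
    p1 = exp(beta * theta)
    return (p0 - p1) / (p0 + p1)      # equals -tanh(beta * theta)

def recover_theta(expect_z, beta):
    """Invert the measured statistic to recover the Hamiltonian parameter."""
    return -atanh(expect_z) / beta
```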
<p><em>Based on joint work with Allen Liu, Ankur Moitra and Ewin Tang.</em></p>
<time datetime="2023-12-08T20:00:00Z">December 8, 2023 3:00pm</time><time datetime="2023-12-08T21:00:00Z">December 8, 2023 4:00pm</time>
http://aineshbakshi.com/
Postdoctoral Associate, Electrical Engineering and Computer Science, Department of Mathematics, Massachusetts Institute of Technology
http://theory.cs.cmu.edu/
<a href="mailto:odonnell@cs.cmu.edu">odonnell@cs.cmu.edu</a>
Seminar Series
<a href="https://csd.cmu.edu/research/research-areas/theory" hreflang="en">Theory</a>
Blelloch-Skees Conference Room, Gates Hillman 8115
Fri, 08 Dec 2023 20:00:00 +0000
Operations Research Seminar
https://csd.cmu.edu/calendar/seminar-Operations-Research-2023-
<span>Operations Research Seminar</span>
Tepper 4242
<span>Fri, 12/08/2023 - 13:30</span>
In Person
Polyhedral Formulations for (Subset) Feedback Vertex Set
CHANDRA CHEKURI
<p>We consider feedback vertex set (FVS) in undirected graphs, and its generalization, the subset feedback vertex set (SFVS) problem. In FVS the input is a vertex-weighted graph G=(V,E) and the goal is to remove a minimum-weight subset of vertices S such that G-S has no cycles. In SFVS we are also given a subset T of the vertices, called terminals, and the goal is to remove a minimum-weight subset of vertices S such that G-S has no cycle containing a terminal. FVS is a well-known NP-Hard problem and has admitted a 2-approximation for several decades via the local-ratio method (Bafna et al.; Becker and Geiger). Moreover, 2 is the best approximation ratio one can hope for assuming the unique games conjecture. Chudak et al. developed an LP relaxation for FVS and interpreted the local-ratio algorithms as primal-dual algorithms with respect to this relaxation. However, their LP relaxation was not known to be solvable in polynomial time.</p>
<p>We develop a new LP relaxation that is poly-time solvable and has an integrality gap of 2. This LP relaxation is based on a connection to a relaxation for densest subgraph by Charikar. A few years ago Chekuri and Madan developed a poly-time solvable LP relaxation for the more general SFVS problem and they showed that it had an integrality gap of at most 13 via a rounding algorithm. They raised the question of whether their LP relaxation had an integrality gap of at most 2 for FVS (and also SFVS). We answer this question in the affirmative for FVS. Despite proving that these two LP relaxations (which come from different perspectives on the problem) have an integrality gap of at most 2 for FVS, we do not know a direct way to round the relaxations to achieve this bound. We conjecture an extreme point property for one of the LP relaxations that would allow one to obtain an iterated rounding 2-approximation algorithm, and provide evidence for this conjecture via a related result for the pseudo-forest deletion set problem.</p>
<p>The goal of the talk is to highlight the ideas and connections behind the several LP relaxations for these problems, and to point out several open problems.</p>
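<p><em>To make the FVS objective concrete, here is a brute-force checker of our own (exponential time, tiny graphs only): remove a minimum-weight vertex subset so that what remains is a forest, tested with union-find.</em></p>

```python
from itertools import combinations

def is_forest(vertices, edges):
    """An undirected graph is a forest iff no edge joins two vertices
    already in the same connected component (union-find test)."""
    parent = {v: v for v in vertices}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False                    # this edge closes a cycle
        parent[ru] = rv
    return True

def min_fvs(vertices, edges, weight=None):
    """Minimum-weight feedback vertex set by exhaustive search."""
    weight = weight or {v: 1 for v in vertices}
    best, best_w = None, float("inf")
    for r in range(len(vertices) + 1):
        for s in combinations(vertices, r):
            rest = set(vertices) - set(s)
            kept = [(u, v) for u, v in edges if u in rest and v in rest]
            w = sum(weight[v] for v in s)
            if w < best_w and is_forest(rest, kept):
                best, best_w = set(s), w
    return best, best_w
```

<p><em>On K4, for instance, the optimum is 2: deleting any single vertex still leaves a triangle.</em></p>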
<p><em>Based on joint work with <a href="https://arxiv.org/abs/2303.12850" target="_blank">Karthik Chandrasekharan, Samuel Fiorini, Shubhang Kulkarni, and Stefan Weltge</a>, and an <a href="https://epubs.siam.org/doi/10.1137/1.9781611974331.ch58" target="_blank">older paper with Vivek Madan</a>.</em></p>
<p>—</p>
<p><a href="http://chekuri.cs.illinois.edu/" target="_blank">Chandra Chekuri</a> is the Paul and Cynthia Saylor Professor in the Department of Computer Science at University of Illinois, Urbana-Champaign. He joined the university in 2006 after spending eight years at Lucent Bell Labs. Prior to that he received his PhD from Stanford University and an undergraduate degree from IIT Chennai. He is interested in the design and analysis of algorithms, combinatorial optimization, and theoretical computer science. He is happy about some of his contributions to (fast) approximation algorithms, graphs and networks, scheduling, and optimizing with submodular functions.</p>
<time datetime="2023-12-08T18:30:00Z">December 8, 2023 1:30pm</time><time datetime="2023-12-08T19:30:00Z">December 8, 2023 2:30pm</time>
http://chekuri.cs.illinois.edu/
Paul and Cynthia Saylor Professor, Algorithms / Theory Group, Department of Computer Science, University of Illinois, Urbana-Champaign
<a href="mailto:pconley@andrew.cmu.edu">pconley@andrew.cmu.edu</a>
Seminar
<a href="https://csd.cmu.edu/research/research-areas/theory" hreflang="en">Theory</a>
<a href="https://csd.cmu.edu/research/research-areas/algorithms-and-complexity" hreflang="en">Algorithms and Complexity</a>
Tepper 4242
Fri, 08 Dec 2023 18:30:00 +0000
Algorithms, Combinatorics and Optimization Seminar
https://csd.cmu.edu/calendar/seminar-series-ACO-2023-12-07
<span>Algorithms, Combinatorics and Optimization Seminar</span>
Wean 8220
<span>Thu, 12/07/2023 - 15:30</span>
In Person
Optimal Scheduling of Elastic and Inelastic Jobs
BEN BERG
<p>A wide range of modern computer systems rely on parallelism to process jobs quickly. Unlike the jobs considered in most classical scheduling problems, parallelizable jobs can be completed more quickly when they are run on multiple servers or cores. However, not all jobs are perfectly parallelizable. Computing workloads are typically composed of a mixture of highly parallelizable elastic jobs and less parallelizable inelastic jobs. Given a fixed number of cores, it is not obvious how to best allocate cores across a stream of arriving elastic and inelastic jobs. We consider the problem of allocating cores to jobs in order to minimize the mean response time across jobs — the average time from when a job arrives to the system until it is complete.</p>
<p>To solve this problem, one must balance a tradeoff between prioritizing short jobs and deferring parallelizable work. Completing short jobs before long jobs is known to reduce the mean response time in many systems. However, when jobs are parallelizable, it is also important to keep some elastic jobs in the system to ensure that the available cores remain utilized. Hence, it can also be beneficial to prioritize longer inelastic jobs ahead of shorter elastic jobs. Using coupling arguments and Lyapunov drift arguments from queueing theory, we show how to optimally balance this tradeoff. Specifically, we show how the optimal core allocation policy depends on the number of cores in the system, the system load, and the distributions of elastic and inelastic job sizes.</p>
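<p><em>A tiny worked instance (ours, for intuition):</em> two cores; an elastic job of size 4 with linear speedup and an inelastic job of size 1 arrive together at time 0. Dedicating both cores to the elastic job gives mean response time 2.5, while running the inelastic job alongside gives 1.75, so deferring the parallelizable work wins here.</p>

```python
def elastic_first():
    """Both cores run the elastic job (size 4), then the inelastic (size 1)."""
    t_elastic = 4 / 2                 # rate 2 on two cores: done at t = 2
    t_inelastic = t_elastic + 1       # starts at t = 2, done at t = 3
    return (t_elastic + t_inelastic) / 2

def share_cores():
    """Inelastic job gets core 1 during [0, 1]; elastic job gets core 2,
    then both cores once the inelastic job finishes."""
    t_inelastic = 1.0
    work_left = 4 - 1.0               # elastic finished 1 unit by t = 1
    t_elastic = 1.0 + work_left / 2   # remaining 3 units at rate 2
    return (t_inelastic + t_elastic) / 2
```

<p><em>Keeping one core on the short inelastic job leaves no core idle at any point, which is exactly the utilization concern described above.</em></p>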
<time datetime="2023-12-07T20:30:00Z">December 7, 2023 3:30pm</time><time datetime="2023-12-07T21:30:00Z">December 7, 2023 4:30pm</time>
https://cs.unc.edu/person/benjamin-berg/
Assistant Professor, Computer Science Department, University of North Carolina at Chapel Hill
https://aco.math.cmu.edu/abs-23-24/dec07.html
<a href="mailto:martap@andrew.cmu.edu">martap@andrew.cmu.edu</a>
Seminar Series
<a href="https://csd.cmu.edu/research/research-areas/algorithms-and-complexity" hreflang="en">Algorithms and Complexity</a>
<a href="https://csd.cmu.edu/research/research-areas/theory" hreflang="en">Theory</a>
Wean 8220
Thu, 07 Dec 2023 20:30:00 +0000
Theory Lunch Seminar
https://csd.cmu.edu/calendar/seminar-series-Theory-Lunch-2023-12-06
<span>Theory Lunch Seminar</span>
Group Viewing Gates Hillman 8102 and Zoom
<span>Wed, 12/06/2023 - 12:00</span>
In Person and Virtual - ET
Adaptive Regret for Bandits Made Possible: Two Queries Suffice
QIUYI (RICHARD) ZHANG
<p>Fast changing states or volatile environments pose a significant challenge to online optimization, which needs to perform rapid adaptation under limited observation. In this paper, we give query and regret optimal bandit algorithms under the strict notion of strongly adaptive regret, which measures the maximum regret over any contiguous interval $I$.</p>
<p>Due to its worst-case nature, there is an almost-linear $\Omega(|I|^{1-\epsilon})$ regret lower bound when only one query per round is allowed [Daniely et al., ICML 2015]. Surprisingly, with just two queries per round, we give the Strongly Adaptive Bandit Learner (StABL), which achieves $\widetilde{O}(\sqrt{n|I|})$ adaptive regret for multi-armed bandits with $n$ arms. The bound is tight and cannot be improved in general.</p>
<p>Our algorithm leverages a multiplicative update scheme of varying stepsizes and a carefully chosen observation distribution to control the variance. Furthermore, we extend our results and provide optimal algorithms in the bandit convex optimization and dynamic regret settings. Finally, we empirically demonstrate the superior performance of our algorithms under volatile environments and for downstream tasks, such as algorithm selection for hyperparameter optimization. </p>
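<p><em>For readers unfamiliar with the machinery, a standard single-query EXP3 sketch (ours; StABL itself adds a second query and varying stepsizes, which we do not reproduce):</em></p>

```python
import math
import random

def exp3(rewards, n_arms, horizon, gamma=0.1, seed=0):
    """Vanilla EXP3: multiplicative weights over arms with
    importance-weighted reward estimates. `rewards(t, arm)` must
    return a value in [0, 1]. Returns the final arm distribution."""
    rng = random.Random(seed)
    w = [1.0] * n_arms
    for t in range(horizon):
        total = sum(w)
        p = [(1 - gamma) * wi / total + gamma / n_arms for wi in w]
        arm = rng.choices(range(n_arms), weights=p)[0]
        x = rewards(t, arm)
        # dividing by p[arm] keeps the reward estimate unbiased
        w[arm] *= math.exp(gamma * x / (p[arm] * n_arms))
        m = max(w)
        w = [wi / m for wi in w]      # rescale to avoid float overflow
    total = sum(w)
    return [(1 - gamma) * wi / total + gamma / n_arms for wi in w]
```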
<p><em>Group Viewing and <a href="https://cmu.zoom.us/j/96386070966?pwd=SzJ5cFhROEhrTHR2K2J2N1Q1QlVSUT09" target="_blank">Zoom</a> Participation. See announcement.</em></p>
<time datetime="2023-12-06T17:00:00Z">December 6, 2023 12:00pm</time><time datetime="2023-12-06T18:00:00Z">December 6, 2023 1:00pm</time>
https://qiuyiz.github.io/
Google DeepMind
https://www.cs.cmu.edu/~theorylunch/
<a href="mailto:dhathcoc@andrew.cmu.edu">dhathcoc@andrew.cmu.edu</a>
Seminar Series
<a href="https://csd.cmu.edu/research/research-areas/theory" hreflang="en">Theory</a>
Group Viewing Gates Hillman 8102 and Zoom
Wed, 06 Dec 2023 17:00:00 +0000
Computer Science Thesis Proposal
https://csd.cmu.edu/calendar/thesis-proposal-KACHAM-2023-12-05
<span>Computer Science Thesis Proposal</span>
Newell-Simon Hall 4305
<span>Tue, 12/05/2023 - 13:30</span>
In Person
On Efficient Sketching Algorithms
PRANEETH KACHAM
<p>Sketching has become a very effective tool for efficiently extracting useful information from very large inputs that are presented to an algorithm in various forms, such as a turnstile stream or data partitioned arbitrarily among multiple servers. In this proposal, we describe the main paradigms of computation with large datasets and how we obtain new efficient algorithms for a large variety of problems using sketch-based techniques.</p>
<p><strong>Thesis Committee:</strong><br />
David Woodruff (Chair)<br />
Pravesh Kothari<br />
Richard Peng<br />
Rasmus Pagh (University of Copenhagen)</p>
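<p><em>A concrete instance of the paradigm (ours, not taken from the proposal):</em> a CountSketch is a linear map of the frequency vector, so it handles turnstile streams, insertions and deletions alike, with a few operations per update. Python's built-in <code>hash</code> stands in here for properly chosen pairwise-independent hash functions.</p>

```python
import random

class CountSketch:
    """Linear sketch estimating item frequencies in a turnstile stream."""

    def __init__(self, rows=5, width=256, seed=0):
        rng = random.Random(seed)
        self.width = width
        self.salts = [(rng.randrange(1 << 30), rng.randrange(1 << 30))
                      for _ in range(rows)]
        self.table = [[0] * width for _ in range(rows)]

    def _bucket(self, r, item):
        return hash((self.salts[r][0], item)) % self.width

    def _sign(self, r, item):
        return 1 if hash((self.salts[r][1], item)) % 2 == 0 else -1

    def update(self, item, delta):
        """O(rows) work per stream update; delta < 0 encodes deletions."""
        for r in range(len(self.table)):
            self.table[r][self._bucket(r, item)] += self._sign(r, item) * delta

    def estimate(self, item):
        ests = [self._sign(r, item) * self.table[r][self._bucket(r, item)]
                for r in range(len(self.table))]
        return sorted(ests)[len(ests) // 2]    # median across rows
```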
<time datetime="2023-12-05T18:30:00Z">December 5, 2023 1:30pm</time><time datetime="2023-12-05T20:00:00Z">December 5, 2023 3:00pm</time>
https://www.praneethkacham.com/
Ph.D. Student, Computer Science Department, Carnegie Mellon University
Thesis Proposal
<a href="https://csd.cmu.edu/people/doctoral-student/praneeth-kacham" hreflang="en">Praneeth Kacham</a>
<a href="https://csd.cmu.edu/research/research-areas/theory" hreflang="en">Theory</a>
Newell-Simon Hall 4305
Tue, 05 Dec 2023 18:30:00 +0000
Algorithms, Combinatorics and Optimization Seminar
https://csd.cmu.edu/calendar/seminar-series-ACO-2023-11-30
<span>Algorithms, Combinatorics and Optimization Seminar</span>
Wean 8220
<span>Thu, 11/30/2023 - 15:30</span>
In Person
On relaxations of “rank” for boolean matrices
KAAVE HOSSEINI
<p>In this talk I will discuss a few well-known complexity parameters for boolean matrices that are relaxations of rank (over the reals): approximate rank, sign-rank/dimension complexity, margin/discrepancy, the gamma2 norm, and approximate gamma2. The focus of this talk is the meta-question: "what is the relationship between these parameters?". It turns out that studying this meta-question connects many different areas, and equivalent stories can be told in learning theory, communication complexity, convex geometry, the theory of dimensionality reduction, etc. I will answer some of these pairwise relations using different tools such as Fourier analysis, topology, and ideas from discrete geometry.</p>
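<p><em>As a baseline for these relaxations (our illustrative code): real rank itself can be computed by Gaussian elimination. The identity matrix is the standard example where rank is full yet relaxations such as sign-rank stay bounded.</em></p>

```python
def real_rank(matrix, tol=1e-9):
    """Rank of a matrix over the reals via Gaussian elimination."""
    m = [list(map(float, row)) for row in matrix]
    rank, rows, cols = 0, len(m), len(m[0]) if m else 0
    for col in range(cols):
        # find a pivot row for this column among the unreduced rows
        pivot = next((r for r in range(rank, rows) if abs(m[r][col]) > tol), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        for r in range(rows):
            if r != rank and abs(m[r][col]) > tol:
                f = m[r][col] / m[rank][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank
```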
<p><em>Refreshments — 3:00 pm, Math Lounge / Wean 6220</em></p>
<time datetime="2023-11-30T20:30:00Z">November 30, 2023 3:30pm</time><time datetime="2023-11-30T21:30:00Z">November 30, 2023 4:30pm</time>
https://www.cs.rochester.edu/u/shossei2/
Assistant Professor, Department of Computer Science, University of Rochester
https://aco.math.cmu.edu/abs-23-24/nov30.html
<a href="mailto:martap@andrew.cmu.edu">martap@andrew.cmu.edu</a>
Seminar Series
<a href="https://csd.cmu.edu/research/research-areas/algorithms-and-complexity" hreflang="en">Algorithms and Complexity</a>
<a href="https://csd.cmu.edu/research/research-areas/theory" hreflang="en">Theory</a>
Wean 8220
Thu, 30 Nov 2023 20:30:00 +0000
Theory Lunch Seminar
https://csd.cmu.edu/calendar/seminar-series-Theory-Lunch-2023-11-29
<span>Theory Lunch Seminar</span>
Group Viewing Gates Hillman 8102 and Zoom
<span>Wed, 11/29/2023 - 12:00</span>
In Person and Virtual - ET
Tame the Beast: Practical Theories for Responsible Modern AI Deployment
ZHUN DENG
<p>Modern digital systems powered by artificial intelligence (AI) have permeated various aspects of society, playing an instrumental role in many critical applications. This proliferation has accelerated recently, exemplified by the emergence of groundbreaking systems such as GPT-4 and MidJourney, giving rise to the concern that “no one, not even their creators, can understand, predict, or reliably control” these technologies. Predictive performance is crucial to modern AI-informed decision-making systems and is often measured by accuracy on stylized benchmarks. However, it is equally crucial to “know what is unknown”: one must also assess the inherent uncertainty in the predictions, so as to understand likely failure modes of decision-making and guarantee the transparency and consistency of AI systems.</p>
<p>In this talk, I will introduce our recent work on distribution-free uncertainty quantification, which provides tight finite-sample bounds for a rich class of statistical functionals of quantile functions and enables control of the dispersion of the loss distribution, i.e., the extent to which different members of a population experience unequal effects of algorithmic decisions. We give multiple ways to obtain tight and practically useful bounds for guiding modern AI deployment, including inverting test statistics and a novel numerical method. To demonstrate the power of our framework, we apply it to large language models like GPT-4, using provable bounds on families of informative risk measures to guide prompt engineering for problems like chatbot harmfulness and summarization of clinical patient notes.</p>
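<p><em>A classical tool in the same distribution-free spirit (ours, not the talk's method) is the Dvoretzky-Kiefer-Wolfowitz inequality, which yields a simultaneous confidence band on the CDF and hence finite-sample bounds on quantiles of any distribution:</em></p>

```python
from math import ceil, log, sqrt

def dkw_quantile_upper_bound(samples, q, delta):
    """Upper confidence bound on the q-th quantile, valid with
    probability >= 1 - delta for ANY distribution.

    DKW: sup_x |F_hat(x) - F(x)| <= eps, eps = sqrt(ln(2/delta) / (2n)),
    so the true q-quantile is at most the empirical (q+eps)-quantile."""
    n = len(samples)
    eps = sqrt(log(2 / delta) / (2 * n))
    xs = sorted(samples)
    k = ceil((q + eps) * n)           # 1-based order-statistic index
    if k > n:
        return float("inf")           # band exceeds the sample range
    return xs[k - 1]
```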
<p><em>In Person and <a href="https://cmu.zoom.us/j/96386070966" target="_blank">Zoom</a> Participation. See announcement.</em><br /><br /><a href="https://www.youtube.com/channel/UCWFp4UWNiOv71j0sPbdNiqw" target="_blank">CMU Theory Youtube Channel</a></p>
<time datetime="2023-11-29T17:00:00Z">November 29, 2023 12:00pm</time><time datetime="2023-11-29T18:00:00Z">November 29, 2023 1:00pm</time>
https://www.zhundeng.org/
Postdoctoral Researcher, Data Science Institute, Columbia University
https://www.cs.cmu.edu/~theorylunch/
<a href="mailto:dhathcoc@andrew.cmu.edu">dhathcoc@andrew.cmu.edu</a>
Seminar Series
<a href="https://csd.cmu.edu/research/research-areas/theory" hreflang="en">Theory</a>
Group Viewing Gates Hillman 8102 and Zoom
Wed, 29 Nov 2023 17:00:00 +0000
Theory Seminar
https://csd.cmu.edu/calendar/seminar-series-Theory-2023-11-17
<span>Theory Seminar</span>
Traffic21 Classroom, Gates Hillman 6501
<span>Fri, 11/17/2023 - 15:30</span>
In Person
Agnostic proper learning of monotone functions: beyond the black-box correction barrier
ARSEN VASILYAN
<p>We give the first agnostic, efficient, proper learning algorithm for monotone Boolean functions. Given $2^{\tilde{O}(n^{0.5}/\epsilon)}$ uniformly random examples of an unknown function $f:\{0,1\}^n \rightarrow \{0,1\}$, our algorithm outputs a hypothesis $g:\{0,1\}^n \rightarrow \{0,1\}$ that is monotone and $(\mathrm{opt} + \epsilon)$-close to $f$, where $\mathrm{opt}$ is the distance from $f$ to the closest monotone function. The running time of the algorithm (and consequently the size and evaluation time of the hypothesis) is also $2^{\tilde{O}(n^{0.5}/\epsilon)}$, nearly matching the lower bound of Blais et al. (RANDOM '15). We also give an algorithm for estimating, up to additive error $\epsilon$, the distance of an unknown function $f$ to monotone using a run-time of $2^{\tilde{O}(n^{0.5}/\epsilon)}$.</p>
<p>Previously, for both of these problems, sample-efficient algorithms were known, but these algorithms were not run-time efficient. Our work thus closes this gap between the run-time and sample complexity. This work builds upon the improper learning algorithm of Bshouty and Tamon (JACM '96) and the proper semiagnostic learning algorithm of Lange, Rubinfeld, and Vasilyan (FOCS '22), which obtains a non-monotone Boolean-valued hypothesis and then “corrects” it to monotone using query-efficient local computation algorithms on graphs. This black-box correction approach can achieve no error better than $2 \cdot \mathrm{opt} + \epsilon$ information-theoretically. We bypass this barrier by augmenting the improper learner with a convex optimization step, and by learning and correcting a real-valued function before rounding its values to Boolean. Our real-valued correction algorithm solves the “poset sorting” problem of [LRV22] for functions over general posets with non-Boolean labels.</p>
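<p><em>For tiny n, the quantity opt in the abstract can be brute-forced directly (our illustrative check; the search is doubly exponential in n):</em></p>

```python
from itertools import product

def dist_to_monotone(f, n):
    """Hamming distance (number of points) from f: {0,1}^n -> {0,1}
    to the closest monotone function, by exhaustive search."""
    points = list(product([0, 1], repeat=n))
    # all coordinatewise-comparable pairs x <= y
    pairs = [(x, y) for x in points for y in points
             if all(a <= b for a, b in zip(x, y))]
    best = len(points)
    for bits in product([0, 1], repeat=len(points)):   # every function g
        g = dict(zip(points, bits))
        if all(g[x] <= g[y] for x, y in pairs):        # g is monotone
            best = min(best, sum(g[x] != f(x) for x in points))
    return best
```

<p><em>For example, AND is already monotone (distance 0), while the function that negates its first coordinate is at distance 2 from every monotone function on two variables.</em></p>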
<p><em><a href="https://arxiv.org/abs/2304.02700" target="_blank">Joint work with Jane Lange</a>.</em></p>
<time datetime="2023-11-17T20:30:00Z">November 17, 2023 3:30pm</time><time datetime="2023-11-17T21:30:00Z">November 17, 2023 4:30pm</time>
https://arsenvasilyan.github.io/
Ph.D. Student, Department of Computer Science, Massachusetts Institute of Technology
http://theory.cs.cmu.edu/#talks
<a href="mailto:praveshk@andrew.cmu.edu">praveshk@andrew.cmu.edu</a>
Seminar
<a href="https://csd.cmu.edu/research/research-areas/theory" hreflang="en">Theory</a>
Traffic21 Classroom, Gates Hillman 6501
Fri, 17 Nov 2023 20:30:00 +0000
Joint Theory Seminar / Computer Science Speaking Skills Talk
https://csd.cmu.edu/calendar/joint-theory-seminar-computer-science-speaking-skills-talk
<span>Joint Theory Seminar / Computer Science Speaking Skills Talk</span>
Gates Hillman 8102 and Zoom
<span>Wed, 11/15/2023 - 12:00</span>
In Person and Virtual - ET
How to make streaming algorithms fast?
PRANEETH KACHAM
<p>In this talk, I'll present our work on a new pseudorandom generator (PRG) that can be considered a generalization of Nisan's PRG. We show that the space-vs-time tradeoff offered by our PRG can be used to obtain streaming algorithms that are optimal in space while having a very small update time, i.e., the time the streaming algorithm needs to process each update.</p>
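<p><em>To ground the update-time concern (our example, not the talk's construction): streaming algorithms typically pay a few hash evaluations per update, e.g. from the classical Carter-Wegman 2-universal family, so the per-position cost of the randomness source feeds straight into update time.</em></p>

```python
import random

def make_hash(m, p=(1 << 61) - 1, seed=0):
    """Draw h from the 2-universal family h(x) = ((a*x + b) mod p) mod m,
    with p prime (here the Mersenne prime 2^61 - 1) and 0 <= x < p."""
    rng = random.Random(seed)
    a = rng.randrange(1, p)
    b = rng.randrange(p)
    def h(x):
        return ((a * x + b) % p) % m   # constant work per evaluation
    return h
```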
<p><em>This talk is being presented as a part of the <a href="https://www.cs.cmu.edu/~theorylunch/" target="_blank">Theory Lunch Seminar</a> series. </em></p>
<p><em>Presented in Partial Fulfillment of the CSD Speaking Skills Requirement</em>.</p>
<p><a href="https://www.youtube.com/channel/UCWFp4UWNiOv71j0sPbdNiqw" target="_blank">CMU Theory Youtube channel</a></p>
<time datetime="2023-11-15T17:00:00Z">November 15, 2023 12:00pm</time><time datetime="2023-11-15T18:00:00Z">November 15, 2023 1:00pm</time>
https://www.praneethkacham.com/
Ph.D. Student, Computer Science Department, Carnegie Mellon University
https://www.cs.cmu.edu/~theorylunch/
<a href="mailto:dhathcoc@andrew.cmu.edu">dhathcoc@andrew.cmu.edu</a>
Speaking Skills
<a href="https://csd.cmu.edu/people/doctoral-student/praneeth-kacham" hreflang="en">Praneeth Kacham</a>
<a href="https://csd.cmu.edu/research/research-areas/theory" hreflang="en">Theory</a>
Gates Hillman 8102 and Zoom
Wed, 15 Nov 2023 17:00:00 +0000