KTH School of Electrical Engineering and Computer Science

Course analysis for DD2441/DD3441 Seminars on TCS: Communication complexity, Autumn 2012

Course data

Course DD2441/DD3441 Seminars on TCS: Communication complexity (MSc-/PhD-level course code)
ECTS credits 6 ECTS credits
Semester/period Autumn 2012, period 1
Examination Problem sets
Lectures 16 lectures of 2 x 45 minutes (including 7 guest lectures by 3 guest lecturers)
Course literature Some material from Eyal Kushilevitz and Noam Nisan: Communication Complexity, 1997; research papers; hand-written lecture notes
# students attending first lecture 17 students (10 MSc students; 7 PhD students) plus 3 listeners
# students completing some course requirement 16 students handed in solutions to the first problem set (out of which 14 passed)
# students passed 10 students passed the MSc-level course DD2441 (grade A: 4, grade B: 2, grade C: 2, grade D: 2, grade E: 0) and 4 students passed the PhD-level course DD3441
Performance ratio (prestationsgrad) 89% according to VIS for DD2441; no such data available for DD3441
Passing ratio (examinationsgrad) 89% according to VIS for DD2441; no such data available for DD3441
Main instructor Jakob Nordström
Other instructors Massimo Lauria graded problem sets and gave two lectures; Joshua Brody (Aarhus) and Troy Lee (Singapore) gave guest lectures.


DD2441 Seminars on Theoretical Computer Science is an advanced course in theoretical computer science with varying content which is given every second year (alternating with the course DD2446 Complexity Theory). The general idea is to cover some area of active research in theoretical computer science. Lectures are often based on research or survey papers in that area, and/or on lecture notes from other advanced courses, since the material might be so recent that there are no good textbooks yet. In the autumn of 2012, the topic of the course was communication complexity.

The students found this to be quite a challenging course (which it is intended to be), but most students also found it very rewarding. Student opinions were evaluated not only at the end of the course in the traditional course evaluation, but also throughout the course in anonymous written "opinion polls" after lectures. This turned out to be very helpful, since it made it possible to follow how opinions developed during the run of the course. At the very end of the course, a couple of students got very upset when they did not quite get the grade they expected, and used the opportunity to express their dissatisfaction very clearly in the free-text comments in the course evaluation. Looking at the opinion polls, however, it seems clear that the harsh comments in the course evaluation are not representative of student opinion, and that even the critical students themselves did not harbour these views during the run of the course.

There were substantial changes in the organization of the course compared to the last time it was given. The number of lectures was brought back to the traditional 16 x 90 minutes, and it is hard to see how the course can be given in a satisfactory way without this set-up (a view that is strongly supported by the course evaluation). We used Piazza for teacher-student interaction, which worked well. We had several guest lectures by world-leading experts on communication complexity, which was highly appreciated. Having a teaching assistant do the grading of problem sets was also helpful, since the workload on the main instructor when reading up on and teaching advanced research-level material is fairly significant.

In view of the many changes, however, there were also some aspects of the course that did not turn out quite as well as hoped for. Some adjustments could be made already during the run of the course thanks to feedback from the opinion polls, but there is also room for improvement when the course is given next time. Students rightly complained that the grading of the problem sets was a bit too strict. This was due to misunderstandings between the main instructor and the grading teaching assistant, and was later corrected. By then, however, a few students had been really upset by the grading of their problem sets, which was unfortunate. Another complaint was that the number of credits was too low. Next time the course is given, the intention is to raise the number of credits to 7.5. Also, in order to give the students time to digest the material, it would probably be preferable to give the course over two periods instead of trying to cram everything into one period.


The course was attended by 17 students (10 MSc students and 7 PhD students) and 3 listeners who did not take the course for credit. 10 students passed the MSc-level course DD2441, including one of the PhD students. The grade distribution was as follows: grade A: 4 students, grade B: 2 students, grade C: 2 students, grade D: 2 students, grade E: 0 students. 4 PhD students passed the PhD-level course DD3441, which had exactly the same course content, but where the requirement to pass corresponded to a C on the MSc-level course.

It should be said that for an advanced theoretical course like this, having 14 students pass the course is quite high by any standard. (Having 10 students pass the MSc-level course can be compared to previous offerings of DD2441: in 2007 a total of 5 students passed the MSc-level course, and in 2011 only 1 student did, according to data from the KTH VIS system.)

Course goals

The course was intended to

  • give an introduction to communication complexity,
  • survey some of the most recent exciting results in this area,
  • give examples of applications of communication complexity in other areas of theoretical computer science, and
  • present a number of open problems right at the frontier of current research,
so that students, after having completed the course,
  • should have a good grasp of the fundamentals of communication complexity,
  • should be able to reconstruct the proofs, at least in principle, of some of the major results during the last 10-15 years, and
  • should be well equipped to attack open problems in the area.

Actual course content

A fairly detailed description of the course content can be found in the schedule at


The course was given in period 1 in the autumn of 2012. The schedule for the course was changed based on a Doodle poll combined with an (anonymous) opinion poll in class at the second lecture. Based on the results of that poll we also decided to use Piazza for teacher-student interaction.

There were 16 lectures of 2 x 45 minutes in the course, including guest lectures by

  • Joshua Brody from Aarhus University (3 lectures),
  • Troy Lee from the Centre for Quantum Technologies, Singapore (2 lectures),
  • Massimo Lauria (2 lectures while the main instructor was travelling).
The students were very happy with the quality of the regular lectures. Troy Lee also got very good reviews and Joshua Brody stellar reviews. The opinions about Massimo Lauria's lectures were a bit more mixed but still good. All of the students thought that having guest lectures was a good or very good idea.

All lectures were given in English.


Examination

The course examination consisted of 4 problem sets (the original plan was for 5 problem sets, but this was adjusted during the run of the course). Using problem sets is the traditional way of examining advanced TCS courses. The students found the problems challenging but interesting, and seemed mostly happy with this form of examination.

As mentioned above, it turned out that the main instructor and the teaching assistant were not quite synchronized at the start of the course regarding how problem sets should be graded (in particular, some problems were graded as either full score or no points at all, leaving little room for partial credit). This was later corrected, and a course opinion poll at the end of the course indicated that students were mostly happy with the grading. In the course evaluation, there was instead some very harshly worded criticism of the grading, but based on side information this seems to have come from a couple of students who were not quite happy with their final grade. Some students expressed concern that they did not understand the principles for how partial credit was awarded, which was understandable since this had not been managed very well at the start of the course. This was taken care of for the later problem sets by publishing notes on Piazza about the principles of grading, after which there were no further questions about this.

Some students felt, though, that the option of giving partial credit should still have been exercised more. Partly, this seems to have been due to the feeling that if a student has written at least something, then there should automatically be at least some points awarded (a position with which the main instructor does not agree), but certainly there were also problems where partial credit could have been awarded more generously. This was partly corrected by revising the grading after discussions between the main instructor and the grading teaching assistant, but in general the grading of the problem sets is an aspect of the course that needs to be improved further the next time the course is given.

In an opinion poll in class at the second lecture, students were also asked whether they wanted scribing of lecture notes as an additional form of examination. The vote was essentially split (9 against, 7 in favour), but more importantly it was pointed out that the question asked was unclear. On September 3 we had a repeat vote in class on two alternatives: (a) 5 problem sets, or (b) 4 problem sets plus scribing of one lecture. Again we got an essentially split vote (8 against, 7 in favour), based on which the instructor made the decision not to have scribed lecture notes (incidentally, the number of problem sets turned out to be 4 anyway, but this was due to later adjustments in planning). With hindsight, the decision not to do scribing was definitely the right one. Doing scribe notes is very labour-intensive, not only for the students but (above all) for the instructor, and this time was better spent on preparing interesting lectures, constructing good problem sets, and providing feedback.

Course literature

The course had no official textbook, since there is no good textbook covering developments in the last 10-15 years in communication complexity. Some of the (mainly introductory) lectures were based on Eyal Kushilevitz and Noam Nisan: Communication Complexity, 1997, which is truly excellent but by now fairly out of date. For the lectures on information theory we used material from an introductory chapter of Thomas M. Cover and Joy A. Thomas: Elements of Information Theory, 2006. Other than that, lectures were based on research papers.

At some early lectures, students received a copy of the lecturer's own hand-written lecture notes, and since this was very appreciated (as also evident in the course opinion polls) it was decided to hand out such notes for all (regular) lectures, and also to scan them to PDF and post them on the course webpage.


Prerequisites

In the prerequisites on the course webpage, it said that "[t]he course is open to anyone, but the main target audience are advanced Master's students and PhD students. As stated on the Study Handbook webpage you need to have taken DD1352 Algorithms, Data Structures, and Complexity or DD2354 Algorithms and Complexity or corresponding courses at other universities (and should feel comfortable with that material). You will also need to know some probability theory and basic linear and abstract algebra. There are no additional formal prerequisites on top of what is stated in the Study Handbook, but you will need mathematical maturity and a willingness to learn new stuff." Students found this to be a reasonable description.

Changes from last time the course was given

Since the topic of this seminar course changes every time it is given, by default different offerings of the course are very different. This time, however, there were also fairly major changes in overall course organization.

DD2441 was previously taught by Johan Håstad. Due to budget constraints, in the last few offerings of the course the number of lectures was halved while the amount of material covered was supposed to be kept at the same level. Looking at the course evaluation and course analysis from the last time the course was given, it seems clear that such a set-up simply was not working very well (not very surprisingly).

This time, the number of lectures was therefore changed back to the traditional 16 x 90 minutes. There was no increased funding for the course, however, and so the additional time for lectures, as well as the time for teaching assistant grading and all costs related to the guest lecturers (who did not receive an honorarium but whose travel and lodging expenses were covered), were all paid by the main instructor's own research grants.

Another change was that the social wiki-style forum Piazza was used for teacher-student interaction. This worked very well. An especially valuable feature of Piazza is that it has support for LaTeX formatting of math, which is almost indispensable for a course like this.

The course had a significant number of guest lectures, which was highly appreciated. The use of a teaching assistant helped to decrease the workload of the main instructor (which, as mentioned above, is significant for a research-level course like this, when preparing lectures almost exclusively involves reading up on, digesting, and presenting research papers). However, the principles for grading of problem sets should have been synchronized between the instructors more thoroughly, and right at the start of the course.

Course evaluation

The course was evaluated through a course evaluation conducted after the course had ended. A total of 15 course participants filled in this evaluation: 12 of them had passed the course, 2 had not finished, and 1 was a listener. 9 of the respondents (60%) were MSc students, 5 (33%) were PhD students, and 1 respondent was a postdoc or faculty member. We also had a number of "opinion polls" conducted at various points during the course, namely on September 10, September 20, October 4, and October 17. These opinion polls were also used to adjust the course planning as we went along.

The above documents contain troves of information about different aspects of the course and about student opinions regarding these aspects. Below follows an attempt to summarize the main points. The percentages do not always sum to 100% in the ACE evaluation tool used, but have been left as computed by the tool. (As noted above, one or two students got very upset with their results after the final problem sets had been graded, and basically used the free-text comments in the course evaluation form to insert rants about the course and personal attacks on the instructors. Such comments have mostly been ignored in this analysis, as it is clear from the other respondents, as well as from the four course opinion polls, that these views are simply not representative.)

27% of the respondents found the course medium hard, 33% fairly hard, and 40% very hard. 47% found the course very interesting and 53% fairly interesting. The course goals were found to be completely clear, and 85% found that the course prerequisites were clear. 73% considered themselves to have the background required to follow the course, 20% did not know, and 7% answered no to this question.

87% of students referred to the course webpage on a weekly basis or more often, and all of them found the webpages fairly or very good. 63% appreciated the course opinion polls, and one commenter explicitly highlighted that these made it possible to improve the course not just in time for the next course offering, but right away. Two commenters expressed disappointment that not all preferences expressed by the students in the polls were taken into account, however. Student feedback on the use of Piazza was positive. There were no negative comments at all, but some course participants said that although this tool was nice, in such a small course we might have been able to get by without it. (From the instructor's perspective, though, it simplified matters considerably even for a course of this size, and since students were overall very positive this seems like a strong argument in favour of using Piazza.)

Lectures were very well-attended — 13% of students said they attended 60-80% of the lectures and 87% attended more than 80% of the lectures. The number of lectures was considered to be appropriate. 33% found the regular (non-guest) lectures very good from a pedagogical point of view and 47% found them good. 13% thought them acceptable and 7% (1 student) found them fairly bad. Some commenters gave very positive free-text reviews:

  • Great lectures. The content is usually quite complex (at least for me), but very well explained.
  • In my opinion optimal with regard to the hardness of the material. I think it's hard to appreciate it until you see someone else try to explain material of the same hardness.
Other commenters pointed out areas where the lectures can improve:
  • The lectures were too few compared to the amount of material presented. The selection of the topics to include in the course has to be done way more carefully.
  • A problem is that the interaction with the students is low and the content is hard. The lecturer has troubles finding out what the students have understand, and the students have troubles telling him.
  • A bit to fast at times, but I guess that was my problem. If I asked questions everytime I didn't follow, we wouldn't get anywhere.
47% found the pace of the lectures about right and 40% found the lectures a bit too fast. 1 student (7%) each thought that the lectures moved way too fast and a bit too slow, respectively. Judging from the opinion polls, MSc students found the pace of the lectures a bit more challenging than the PhD students did, which seems natural. Getting a 50-50 split between "right pace" and "a bit too fast" was what the main instructor was aiming for — if students get exactly everything during the lecture, then it means we are probably moving too slowly. 53% of the students "sometimes" prepared for the lectures in advance (by looking through the notes posted on the web in advance, or by going over notes from previous lectures), but 33% did it "seldom" or "pretty much never." If one could somehow get more students to engage with the material in between lectures, they would probably have an easier time following the lectures as well.

All of the students thought it was good or very good to have guest lectures. 100% found Joshua Brody's guest lectures good or very good, 67% found Troy Lee's lectures good or very good, and for Massimo Lauria this fraction was 46%. A clear majority of the students found that the balance between "pure" communication complexity and applications in other areas of theoretical computer science (e.g., property testing, streaming algorithms, and proof complexity) was about right.

During the lectures, we tended to make the first half 50 minutes long and then only have a 10-minute break (instead of 45 minutes followed by a 15-minute break). 34% of course participants thought this was good or very good, and 53% voted for "OK/neutral as to whether this is good or bad." Sometimes the second hour of the lecture, too, finished 5 minutes late in order to wrap up the material for the lecture (though when this happened it was more of a planning failure on the instructor's side than something intentional). Again 53% were "OK/neutral as to whether this is good or bad," while 27% found it fairly bad and 13% found it fairly good.

Looking back at the discussion about whether to scribe or not scribe the lectures, students were mostly happy with the decision not to scribe and also with the way the decision had been taken (where a clear student majority would have a decisive vote but a weak majority either way could be overruled by the lecturer).

The problem sets were considered to be medium hard by 20% of the respondents, fairly hard by 47%, and very hard by 27%. 60% considered the hardness level to be appropriate, while 20% found the problems a bit too hard and 13% far too hard. Again, MSc students appeared to feel that problems were slightly harder than did the PhD students. The number of problem sets and problems per set were considered to be on the high side but the amount of time given to solve the problems was about right. The content of the problem sets was assessed to correspond well or very well to the contents of the lectures by 74%.

As mentioned above, the grading of the problem sets was a bit of a contentious issue. In the opinion poll taken at the final lecture of the course, a clear majority of respondents thought the grading was appropriate (based on the first two problem sets). In the course evaluation, however, 40% found the grading a bit too harsh and 13% thought it was far too harsh. 40% still thought it was "about right." These numbers need to be improved the next time the course is given. As to the amount of time spent per problem set on average, 7% of respondents assessed that they spent 5-10 hours, 20% spent 10-20 hours, 40% spent 20-30 hours, 13% spent 30-40 hours, and 13% spent more than 40 hours. In the comments, some students complained that the final problem set was posted only at the very end of the course. It is hard to see how the material in the last lectures could have been covered otherwise, though. Other comments note that the problem sets were indeed hard, but that this was advertised as a challenging course and so this was OK. As mentioned above, a couple of commenters were really upset about the grading, but unfortunately it is hard to see anything constructive to take away from these comments.

53% of the students spent 30-70% of their total study/working time on the course while it was being given, with the rest roughly split between a workload below and above that, respectively. 40% thought that 6 ECTS credits was about right, but 60% felt that more than 6 credits would have been appropriate. The main instructor believes that the latter group is right and that the number of credits should be raised to 7.5 ECTS credits.

In the concluding questions, the students were asked what they thought about covering a similar amount of material with half the amount of lectures (as the course had been run during the last few rounds). The answers were overall very sceptical towards this idea.

There were few replies to the question of what aspects of the course should stay the same in a future course offering. Two commenters wrote that most aspects of the course should stay the same, while two commenters provided a much harsher, but not very constructive, assessment.

Regarding aspects that could be improved, here are some of the suggestions given (not quoted verbatim but edited for context) together with comments added by the main instructor:

  • Publish solutions to the exercises so that one can understand what one did wrong. Comments to this effect were provided in the graded solutions. In addition, it would indeed be desirable to publish complete solutions, but this would not be feasible if the workload for the instructor is to be kept within reasonable bounds. If one instead had a round of peer evaluation/grading for each problem set, accompanied by discussions on Piazza, this could hopefully provide a similar kind of feedback at a much lower cost.
  • Finish the course on time (i.e., in period 1). Again, if the problem sets are to cover material also from the last lectures, and if two weeks should be given to work on each problem set, then the deadline for the final problem set will have to be in the first days of the next period. The solution here is probably to give the course over 1½-2 periods, which as noted above is desirable also for other reasons.
  • Use the possibility of awarding partial credit for problems. Indeed, this was a problem with the grading of the problem sets, which in turn was due to the main instructor not giving clear enough instructions to the grading teaching assistant. The grading of problem sets should be improved the next time the course is given.

The two final comments in the course evaluation summary seem to sum up fairly well how most students felt about the course:

  • Thanks for the course, really liked it! Cool to get an intro to real cs research.
  • It was so demanding, but I learned a lot.

Conclusions and future changes

All in all, this seems to have been a successful course. One central question, of course, is to what extent we could and should offer advanced, research-level courses at KTH CSC, and whether the school is willing to pay what such courses cost. On the one hand, if we want MSc students to come into contact with research, which seems to be the officially stated goal, then that should be a strong argument for giving this kind of course. On the other hand, a serious problem is that in the system of accounting for courses used at KTH CSC, courses have to attract 25-30 students before they are considered to "break even." For an advanced theoretical course, having such a number of students is simply not realistic. Even at major North American universities with sizeable TCS groups, having 10-15 students attending and passing an advanced TCS course is a very good number.

As mentioned above, the topic of DD2441 changes every time it is given, and so the changes in the next course offering will automatically be significant. When it comes to overall organization of the course, the following changes should be considered:

  • The number of ECTS credits should be increased to better reflect how demanding this course is. While it is important to avoid credit inflation, and while it could be argued that the credits of an advanced theory course might often seem a bit "expensive" to the average student, a level of 7.5 ECTS credits would seem more appropriate.
  • A large part of the challenge with an advanced theoretical course like this is for students to have the time to digest sometimes fairly abstract and deep mathematical material. Therefore, it would seem desirable to give the course over two periods instead of just one, to have a less hurried pace and give students more time to reflect. This would also make it possible to have the deadline for the final problem set before the official end of the course period.
  • The grading of the problem sets would need to be made more transparent and consistent, especially if a teaching assistant will be used for grading. One interesting idea for the problem sets would be to experiment with a system of peer evaluation, where students would tentatively mark each other's submitted problem set solutions (using a random assignment of solutions to students) before the instructor does the final grading. Apart from increasing the understanding of how problem sets are graded, and of why a certain submitted solution was or was not correct, this would hopefully (and more importantly) lead to improved learning by forcing students to work more deeply with the material and getting them to think about what is good or bad exposition.
  • The current form of examination with problem sets trains written skills only. One idea in order to also train oral skills, and to see if students have reached the level where they can read and understand the research literature (which is one of the goals of the course), would be to introduce a requirement to read up on and present a research paper in order to get the highest grade A. This could then be a compulsory requirement on the PhD-level version of the course.
  • It would be desirable to experiment with features that would encourage (or require) students to work on the material in between lectures and to prepare for lectures. Perhaps some of the background material could be assigned as preparatory reading, and some lectures could be turned into more flipped-classroom-style exercises. This needs to be thought through carefully, however, since it is not entirely obvious how to do this for an advanced math/TCS course where even grasping the definitions can be a major part of the challenge during a lecture.

As a final remark, which is valid not only for this course but for any advanced course of a mathematical nature, it should be stressed again that course funding is a big challenge. This time, half of the course was paid for by the instructor's own research grants. It is not clear that this is a sustainable, long-term model of funding. This seems unfortunate, since one explicit goal of our education of MSc students is to get more connections to research in their courses, and a course like this provides exactly such connections.

Page responsible: Jakob Nordström <jakobn~at-sign~kth~dot~se>
Last updated 2015-01-31