by Jean-Cédric Chappelier -

Dear INLP students,

Looking at the Doodle, the question-answering session will take place (on the usual Zoom) on
Monday, Jan. 18th, at 9:15.

In order to prepare, I have set up a SpeakUp room (number 53422) to collect your questions (before adding one, please check there is no similar one already). Also, vote for the questions you would like answered.

Best,

by Jean-Cédric Chappelier -

Please do not forget our SpeakUp room (number 53422) to collect and VOTE for your questions for Monday morning's Q&A session.
Please do so before Sunday noon.

by Jean-Cédric Chappelier -

The (purely raw!) recording can be found on our channel: https://go.epfl.ch/inlp-videos (direct link: https://tube.switch.ch/videos/5445867a).
Here is the list of questions:

1. IR

1.1 Could you explain again how to make a tf-idf matrix?
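For 1.1, here is a minimal sketch of one common weighting, tf × log(N/df), on a hypothetical three-document corpus (the course may use a different variant or normalization):

```python
import math
from collections import Counter

# Hypothetical mini-corpus, already tokenized.
docs = [["the", "cat", "sat"], ["the", "dog", "sat"], ["the", "dog", "barked"]]
N = len(docs)

# Document frequency: in how many documents each term occurs.
df = Counter(t for d in docs for t in set(d))
vocab = sorted(df)

# One row per document, one column per vocabulary term.
matrix = []
for d in docs:
    tf = Counter(d)
    matrix.append([tf[t] * math.log(N / df[t]) for t in vocab])

print(vocab)                              # ['barked', 'cat', 'dog', 'sat', 'the']
print([round(w, 3) for w in matrix[0]])   # [0.0, 1.099, 0.0, 0.405, 0.0]
```

Note how "the", which occurs in every document (df = N), gets weight 0: it carries no discriminative information.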

1.2 Could you please explain the differences between R-precision and average precision?
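For 1.2, a small sketch contrasting the two metrics on a hypothetical ranking (standard definitions; the exercise may use slightly different notation):

```python
def r_precision(ranked, relevant):
    """Precision at rank R, where R = number of relevant documents."""
    R = len(relevant)
    return sum(1 for d in ranked[:R] if d in relevant) / R

def average_precision(ranked, relevant):
    """Mean of precision@k over the ranks k where a relevant document
    is retrieved (divided by the total number of relevant documents)."""
    hits, total = 0, 0.0
    for k, d in enumerate(ranked, 1):
        if d in relevant:
            hits += 1
            total += hits / k
    return total / len(relevant)

ranked = ["d1", "d2", "d3", "d4", "d5"]
relevant = {"d1", "d3", "d5"}
print(r_precision(ranked, relevant))       # 2/3: single cut at rank R = 3
print(average_precision(ranked, relevant)) # (1 + 2/3 + 3/5) / 3 = 34/45
```

The key difference: R-precision looks at one cutoff (rank R), while average precision rewards placing relevant documents early across the whole ranking.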

======================================================================
2. Parsing

2.1 How can we compute the probability of a parse tree efficiently?

In the case where the product of the rule probabilities in a tree does
not represent the true probability, we would need to obtain all possible
trees of a sentence and compute their probabilities as well before
computing the actual probability of a given tree, but this can be
time-consuming.
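On 2.1: under a PCFG, the joint probability P(tree) is exactly the product of the probabilities of the rules it uses; it is only the conditional P(tree | sentence) that requires the sentence probability, i.e. the sum over all parses, which the inside (CYK-style) algorithm computes in O(n³) without enumerating trees. A minimal sketch of the product part, with purely illustrative rule probabilities:

```python
# Hypothetical toy PCFG: (lhs, rhs) -> probability. Illustrative only.
RULE_P = {
    ("S",  ("NP", "VP")):   1.0,
    ("NP", ("det", "noun")): 0.6,
    ("NP", ("noun",)):       0.4,
    ("VP", ("verb", "NP")):  1.0,
}

def tree_prob(tree):
    """P(tree) = product of the probabilities of the rules it uses.
    A tree is either a terminal string or (lhs, children)."""
    if isinstance(tree, str):                 # terminal: no rule applied
        return 1.0
    lhs, children = tree
    rhs = tuple(c if isinstance(c, str) else c[0] for c in children)
    p = RULE_P[(lhs, rhs)]
    for c in children:
        p *= tree_prob(c)
    return p

# "the cat chased mice"-style structure, abstracted to POS terminals.
t = ("S", (("NP", ("det", "noun")),
           ("VP", ("verb", ("NP", ("noun",))))))
print(tree_prob(t))  # = 1.0 * 0.6 * 1.0 * 0.4
```

To get P(tree | sentence) one would divide this by P(sentence), computed with the inside algorithm rather than by enumeration.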

2.2 Given two syntactic rules S -> A B C and S -> A B, when converting
to Chomsky normal form, can I simplify S -> A B C to S -> S C instead
of introducing a new category X and writing S -> X C, X -> A B?
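On 2.2: reusing S is not safe, because it changes the language. With S -> S C and S -> A B together, S can be rewritten recursively, so the grammar also derives A B C C, A B C C C, etc., which the original grammar does not generate. Binarization must therefore use a brand-new symbol. A small sketch of that step (fresh-symbol names X1, X2, … are my own convention):

```python
from itertools import count

_fresh = count(1)  # generator of fresh intermediate-symbol indices

def binarize(lhs, rhs):
    """Turn A -> X1 ... Xn (n > 2) into binary rules using brand-new
    intermediate symbols. Reusing an existing symbol such as S would
    make it recursive and change the generated language."""
    rules = []
    while len(rhs) > 2:
        new = f"X{next(_fresh)}"
        rules.append((new, rhs[:2]))   # new symbol covers the first two
        rhs = (new,) + rhs[2:]
    rules.append((lhs, rhs))
    return rules

print(binarize("S", ("A", "B", "C")))  # [('X1', ('A', 'B')), ('S', ('X1', 'C'))]
```

This reproduces exactly the S -> X C, X -> A B decomposition from the question.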

2.3 Exercise 12.4: why is the rule "VP -> VBe Adj+" (as opposed to "VP -> )?
Could we have another example of changing the grammar to prevent
over-generalization?

======================================================================
3. Morphology

3.1 Could you give more FULL examples of using transducers?

(for instance going fully through the lexical lookup, regular
inflection, and exception handling transducers in exercise 5.5b?)
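On 3.1: while exercise 5.5b itself is not reproduced here, the mechanics of running a transducer can be sketched as a transition table (state, input symbol) -> (output symbol, next state), where a tape symbol may be a whole morphological tag. A toy example for regular plural inflection, mapping the lexical tape "dog +PL" to the surface form "dogs" (all names and states are my own, purely illustrative):

```python
def transduce(lexical, transitions, start="q0", accept={"qf"}):
    """Run a deterministic transducer over the lexical tape:
    consume one input symbol per step, emit one output symbol."""
    state, out = start, []
    for sym in lexical:
        if (state, sym) not in transitions:
            return None                      # no transition: reject
        emit, state = transitions[(state, sym)]
        out.append(emit)
    return "".join(out) if state in accept else None

# Copy letters unchanged; rewrite the '+PL' tag as surface 's' and accept.
T = {("q0", c): (c, "q0") for c in "abcdefghijklmnopqrstuvwxyz"}
T[("q0", "+PL")] = ("s", "qf")

print(transduce(["d", "o", "g", "+PL"], T))  # dogs
```

Lexical lookup, regular inflection, and exception handling can then be seen as several such machines composed in sequence, with the exception transducer tried before the regular one.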

3.2 Could you please walk through morphology question 3 by hand?

======================================================================
4. Neural Nets

4.1 For the neural network architectures (old exam, Exercise 5): do we
consider weights for a softmax output layer from the last hidden
layer, or do we directly produce the output?

Additionally, regarding RNNs, what would be the sizes of the weight
matrices?
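On 4.1: in one common convention (a sketch, not necessarily the exam's exact setup), the softmax layer does have its own weight matrix from the last hidden layer, and for a simple RNN with input dimension d_in, hidden dimension d_h, and output dimension d_out, the three matrices have shapes (d_h, d_in), (d_h, d_h), and (d_out, d_h):

```python
import numpy as np

# Illustrative sizes (not from the exam): input 5, hidden 8, output 10.
d_in, d_h, d_out = 5, 8, 10
W_xh = np.zeros((d_h, d_in))    # input -> hidden
W_hh = np.zeros((d_h, d_h))     # hidden -> hidden (recurrent)
W_hy = np.zeros((d_out, d_h))   # last hidden -> softmax logits

def step(x, h):
    """One RNN time step: h_t = tanh(W_xh x_t + W_hh h_{t-1}),
    output = softmax(W_hy h_t)."""
    h = np.tanh(W_xh @ x + W_hh @ h)
    logits = W_hy @ h
    p = np.exp(logits) / np.exp(logits).sum()
    return h, p

h, p = step(np.zeros(d_in), np.zeros(d_h))
print(p.shape)  # (10,)
```

The bias vectors (one per layer, sizes d_h and d_out) would be counted on top of these if the exercise asks for the total parameter count.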

======================================================================
5. General

5.1 Will the exam have as many open questions as last year's?

For instance, part 1 of question II in the old exam is worth 8 points,
but it is not really clear how much we should write, and I don't know
whether my thoughts match what you expect.

======================================================================
6. Lexical distance

Are there any 'tricks' when partially filling the error-correction
lookup table, such as lower bounds? I find that I can 'guess' what the
right answer should be, but I am never sure, since checking would
require filling most of the table.
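On this, a few bounds do hold for the unit-cost Levenshtein table D (where D[i][j] is the distance between the first i letters of one word and the first j of the other), and can serve as sanity checks when filling only part of it by hand: |i − j| ≤ D[i][j] ≤ max(i, j), horizontally and vertically adjacent cells differ by at most 1, and values never decrease along a diagonal. A sketch of the full table, against which these checks can be verified:

```python
def edit_distance_table(a, b):
    """Full Levenshtein DP table, unit costs:
    D[i][j] = edit distance between a[:i] and b[:j]."""
    n, m = len(a), len(b)
    D = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        D[i][0] = i                     # delete all of a[:i]
    for j in range(m + 1):
        D[0][j] = j                     # insert all of b[:j]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = min(D[i-1][j-1] + (a[i-1] != b[j-1]),  # subst/copy
                          D[i-1][j] + 1,                     # deletion
                          D[i][j-1] + 1)                     # insertion
    return D

D = edit_distance_table("kitten", "sitting")
print(D[-1][-1])  # 3
```

Whether these bounds are the intended 'trick' for the exercise I cannot say, but they do let you reject an impossible guess without completing the table.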

======================================================================

7. Misc free questions