FAQ

  1. What are the rules for participation grades for General Course students?

Please refer to this page for details.

  1. How can I ask for an extension for a summative assessment?

Send an email request to , specifying the reason for the extension.

Attach a completed extension request form (you can find one here, under the "How can I request an extension?" tab) and supporting documents to your email.

  1. Am I allowed to use ChatGPT and generative AI in my essays?

Please refer to the Generative AI Policy for details.

  1. How do I add a bibliography to the HTML generated by Quarto?

You first need to have a bibliography file (.bib) at hand. You can generate such a file with software like Zotero (see this link for details on how to create a .bib file from an existing Zotero collection of references) or JabRef (see this link on how to create the bibliography and this link for details on how to populate it), or with the "cite" functionality¹ of Google Scholar (which gives you the option to choose the BibTeX reference format), or other academic libraries and journals (e.g. LSE Library Search, Scopus, ACM Digital Library).

Once you have your .bib bibliography, e.g. a file called references.bib, ensure the file is in the same folder as the Quarto file in which you want to cite references from references.bib. For example, if you have a file C:\Documents\test.qmd and you want to make citations within this Quarto file, you need to place your .bib file in the C:\Documents folder.

Once you're done, add this line to your YAML header: bibliography: references.bib (this assumes your .bib file is named references.bib; replace it with the actual name of your file, making sure to respect case and spelling, as YAML is sensitive to both!).
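As a sketch, a minimal YAML header with this line added might look like the following (the title and output format are placeholders; only the bibliography field, which must match your actual file name, is required for citations):

```yaml
---
title: "My document"          # placeholder title
format: html                  # assuming HTML output
bibliography: references.bib  # must match your .bib file name exactly
---
```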

After this, you can start citing whichever reference is present in your .bib file within the text. Suppose you have the following reference in your .bib file:

```bibtex
@article{mehrabi_bias_fairness_survey_2021,
  author = {Mehrabi, Ninareh and Morstatter, Fred and Saxena, Nripsuta and Lerman, Kristina and Galstyan, Aram},
  title = {A Survey on Bias and Fairness in Machine Learning},
  year = {2021},
  issue_date = {July 2022},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  volume = {54},
  number = {6},
  issn = {0360-0300},
  url = {https://doi.org/10.1145/3457607},
  doi = {10.1145/3457607},
  journal = {ACM Comput. Surv.},
  month = {jul},
  articleno = {115},
  numpages = {35},
  keywords = {Fairness and bias in artificial intelligence, natural language processing, deep learning, representation learning, machine learning}
}
```

You would cite the reference by adding [@mehrabi_bias_fairness_survey_2021] (mehrabi_bias_fairness_survey_2021 is what we call the citation key) wherever you need to cite in the text. For example:


Some review studies [@mehrabi_bias_fairness_survey_2021] show that there are currently more than ten definitions of fairness in AI.

and the result is:

Some review studies (Mehrabi et al. 2021) show that there are currently more than ten definitions of fairness in AI.
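Putting the steps above together, a minimal .qmd file that produces this output might look like the following sketch (the title is a placeholder, and references.bib is assumed to sit in the same folder):

```markdown
---
title: "Fairness notes"       # placeholder title
format: html
bibliography: references.bib
---

Some review studies [@mehrabi_bias_fairness_survey_2021] show that
there are currently more than ten definitions of fairness in AI.
```

Rendering this file to HTML resolves the citation key and appends a References section automatically.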

  1. How do I change the look of the HTML generated by Quarto?

Please refer to the explanations given in this page for details.

  1. How do I insert an image in the HTML generated by Quarto?

For this, you need to know the path to your image. For the sake of this example, suppose your .qmd is /Users/Documents/test.qmd and the path to your image is /Users/Pictures/image.png.

We can then insert our image by adding the following line:

![](../Pictures/image.png)

The square brackets ([]) are used to specify a caption; they are left blank here because we are not adding a caption for this particular image. Then, between parentheses (()), we specify the relative path to our image. In this case, we used .. to go to the parent folder of the folder our .qmd is in (/Users/Documents/), i.e. /Users/; from there, we go to the Pictures folder, which is exactly the folder our image is in, i.e. /Users/Pictures/; and we end by specifying the name of the image.

To make things simpler, you could put your image in the same folder as your .qmd and then the syntax simply becomes: ![](image.png)

If you want to change the size of the picture, you can write the following: ![](image.png){width=70%} (the width parameter lets you specify the percentage of the original image's width you want to keep; it resizes the whole image, since the height is scaled to preserve the aspect ratio).
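For example, combining a caption with resizing looks like this (the caption text here is just a placeholder):

```markdown
![A sample caption](image.png){width=70%}
```

The text in the square brackets appears as the figure caption below the image in the rendered HTML.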

References

Mehrabi, Ninareh, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. 2021. โ€œA Survey on Bias and Fairness in Machine Learning.โ€ ACM Comput. Surv. 54 (6). https://doi.org/10.1145/3457607.

Footnotes

  1. In this case, you would copy the references directly into a new file that you open in VSCode and save the result as a .bib file. ↩︎