Okay. Did anyone here come to the talk last night? I'm a little biased, because the speaker is my friend, but I thought he did a very good job of explaining, in general terms, how they're using AI to make real progress in improving health care. So, I made some promises; I was under the weather yesterday, so let's get started today by continuing the discussion from last day, and talk a little bit about that. The questions are:
Okay, can you see my questions on the board?
Yeah, okay, so.
I'll read it out. So there's the password for attendance today, just as a reminder. Last day we asked: is it permissible for Instagram to permit the sentiment "death to the invaders" to be shared on its platform? Following from that: is this action supported by the ACM Code of Ethics? Whose responsibility is the erosion of trust in media and public institutions? And as very-near-future computing professionals, do we have an obligation in this regard, to deal with the erosion of trust? What does it mean for us to uphold the Code of Ethics, and what opportunities do we have to act for the public good in this regard? So those are your questions. We'll have discussion groups and then come back for a full discussion. As an example, think about this: have you heard of the program called ELIZA, from the 1960s? It was developed at MIT, and the idea was to use a very simple approach to formulating questions in the style a therapist might ask. It didn't have any understanding of what the person was responding; it just asked the right questions, and the prompts encouraged the person to talk about what was happening in their lives. The professor who developed it, Joseph Weizenbaum, was shocked by the amount of trust people placed in their conversations with this program, which wasn't doing anything other than appearing to understand and be an empathetic listener. So much so that when people were using the program, they didn't want other people around, because they were entering personal details in response to the questions the program generated. Well, I guess ELIZA was the general program, and DOCTOR, the one I'm talking about, was the therapist script. Weizenbaum was disturbed by people's reactions, by their willingness to place trust in the program instead of in other people, and he eventually stopped working in the field of artificial intelligence. So, keep that story in mind.
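The keyword-and-reflection trick described above can be sketched in a few lines. This is a minimal illustrative sketch, not Weizenbaum's original program (which was written in MAD-SLIP at MIT); the patterns, reflections, and canned responses here are invented for illustration. The program matches the input against simple patterns and echoes the captured words back as a question, with no understanding of the content.

```python
import random
import re

# Swap first and second person so the echo reads naturally
# ("my job" -> "your job"). A tiny subset of ELIZA's behaviour.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "am": "are", "you": "I", "your": "my",
}

# (regex, responses) pairs, tried in order; {0} is the reflected capture.
PATTERNS = [
    (r"i need (.*)", ["Why do you need {0}?",
                      "Would it really help you to get {0}?"]),
    (r"i am (.*)", ["How long have you been {0}?",
                    "Why do you think you are {0}?"]),
    (r"my (.*)", ["Tell me more about your {0}.",
                  "Why does your {0} concern you?"]),
    (r"(.*)", ["Please tell me more.",
               "How does that make you feel?"]),
]

def reflect(fragment: str) -> str:
    """Replace pronouns so the fragment can be echoed back."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(statement: str) -> str:
    """Produce a therapist-style reply by pattern matching alone."""
    text = statement.lower().strip().rstrip(".!?")
    for pattern, responses in PATTERNS:
        match = re.match(pattern, text)
        if match:
            fragment = reflect(match.group(1))
            return random.choice(responses).format(fragment)
    return "Please go on."
```

For example, `respond("I am worried about my exams")` reflects the fragment into "worried about your exams" and wraps it in a question; nothing in the code models what "worried" or "exams" mean, which is exactly the point of the story.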
In your own interactions, do you feel like computer programs are more objective? Do you trust it more if the computer says it, more so than when somebody else, a friend or relative, gives you information? Okay, so let's take some time to talk about these questions; that's some food for discussion. Let's do six groups again, like last time. The class here in the room can get into one group, if you like, if that's not too small.
I think it's just such a good spread of where people were.
Yeah, one point to go with that: it's not just giving people access to people who are responsible for the greater good of the public, because it allowed an outlet for people's outrage, to express themselves in this matter, whether they are in Russia or hearing about major events elsewhere. [inaudible]
So, allowing, you know... [inaudible]
Can you move a little bit out of the way, so I can get a screen capture of this question, please? Thank you very much.
Awesome, I'm good to go. Thank you.
First of all, it makes a little bit of sense, because it's about finding pressure on the Russian government and people to change their decision. Trying to affect those institutions and businesses puts pressure on the Russian people, for better or for worse, and the idea is that putting that much pressure on the government might get them to change their decisions. I don't know that I agree with the decision, but effectively that's the rationale. As for whether computers are more reliable or trustworthy: people may think anything done by a computer rather than a person is right; studies have found that if the computer says it, people believe it's right, and other people assume it's right too. Sometimes, with factual information, talking to a person makes it much easier to correct errors; they'll double-check, they'll walk it through, so they're more likely to spot mistakes. Sometimes they don't, but with a computer it's either yes or no; there's no way for the computer to verify the information.
So, it's sort of option one.
Yeah, that situation is interesting; ELIZA was just prompting questions, but the responses people gave to that early computer, and how it changed what they did, connects to the discussion questions, to me.
I think, even in the justice system, there are things like that. We talked about biases a while ago, but just because of how a program was written, a Black person might be given a high risk rating while a white person would get a lower one, or vice versa, even though they have the same situation. It's only machine learning based off of information, and that only works if the information is good.
Is it that people are conditioned that way?
It may be worth reflecting on that in your own work, because you have a very specific problem, not just a general thing. Even talking to the people involved, it's like, oh, there are so many steps; it's just not you. So that...
That brings us to our next question.
So yeah, it's just about being more aware of what we can use, how different tools have errors, and...
Like, for example, would Instagram be obligated to either keep its restrictions or increase them when working in this new situation? It's about responsibility and trust.
On Instagram, to relax the restrictions...
You should be.
It's hard to do that, though, just because with this media, well, there's probably a certain message they can get across.
They can market themselves to certain subscriber groups if they propagate certain messages. So if they're right-wing media that package right-wing messages, they can market themselves to right-wing people; the same goes for extremist groups.
They may not be able to market themselves to as many people as possible, but they don't have to compete with other outlets, because they only have to say a certain message, whether that message is true or not. Because of that, I think the government also has a responsibility to make sure that these companies are providing reliable, truthful information, that their sources are real, and where their data comes from, because as long as companies have a vested interest in the economic market of who they're selling their messages to, they'll say certain things. So with some government regulation, no matter what, it's the government's responsibility to regulate information spread by social media.
There is a possibility that they are strictly more... [inaudible]
So, it comes back to responsibility.
Okay, so this is a good place to come back together. Good discussion here, and it sounds like a good discussion on Zoom as well. Yes, just when I said let's get back together, the breakout groups closed. Thank you for keeping the discussion going.
For the first question from last day, whether it's permissible for Instagram to permit the sentiment to be shared on its platform: we mostly said no. Reading these kinds of messages impacts people and can be irresponsible, such as encouraging racism against Russians in general, and these messages can absolutely escalate and be associated with worse. So we should keep a high standard for communication online. Put your hand up and I can call on you. So, whose responsibility is it to keep online conversations related to facts?
We weren't sure; we debated whether it should be responsible, at some level, to promote truth, but at the same time I don't want...
Either way, we want to make arguments that are logically sound. So, I mentioned the slippery slope the other day; it's not always a fallacy to talk about a slippery slope, but it is one if we say that if one thing happens, then the worst possible outcome is guaranteed. It's a fallacy to think the initial action is always going to lead to the worst outcome, with no other alternatives possible. That doesn't help us make sound arguments. We'll talk about that some more on Tuesday. Okay, so that's one thing; let's capture those ideas. All right, any questions from Zoom? Anyone still have a keyboard connected on Zoom? Type yes. Oh, indeed, okay, there you go. Thank you. Any comments on here? I'm going to erase this now. Someone asked whether the issue of trust can be addressed by adding more artificial intelligence. Not really, because it depends on the situation. Like the situation you explained, when the response doesn't matter too much, it can help; but there are situations where, if the people who build the AI have biases, those biases are still going to show up in the AI. That was mentioned last night too. A thought came to me from last night, about doctors: they were using artificial intelligence to understand pathology reports and to analyze them, with something coming out sort of like a Google search over these medical records. It can answer questions about which organs are involved in a diagnosis and what kinds of tumors there were; this is related to cancer centre data. So it could discover the structure of these reports and answer questions about them. One application the speaker talked about last night was doing a bit of preprocessing, so that some of the reports the physicians have to complete could be filled in with some details.
That might save the physician some time preparing the reports, and it might also highlight some aspects of the X-ray or the images that warrant further examination, or maybe focus the direction in which the doctor writing the report could direct his or her attention. So it's not supplanting the physician, but supporting their work, streamlining the workflow, and allowing them to spend more time on the things they are most qualified to do. Now, about the blog critique: for the entries in the public blog, those are clear; if you posted something in the public blog, you can critique the one that follows yours. If you didn't put it in the public blog, I will fill in the rest and assign blog entries to the people who aren't in the public blog. Does that make sense?
For the blog critique, do I take the one that follows my blog, or the one that was posted before mine?
I think I said after; let me say that again.
It's the one posted after, so regardless of whether it appears before or after yours in the display: take the one with a timestamp later than yours, the next one. Yeah, okay, thanks. Okay, and I'll get that sorted out today, so everyone knows which blog they have to critique. Any other comments about our discussion here? As future computing professionals, do we have an obligation regarding the erosion of trust? We sort of agreed that we definitely do have an obligation, because as individuals we might have biases that we aren't aware of, but even so, we should take action to correct them as soon as we become aware of them. I think from time to time of the quote by Martin Niemöller after the Second World War. He says, "First they came for...", and it's been adapted to different things, but the structure is: first they came for one group, and I wasn't part of that group, so I didn't do anything; then they came for the next group, and I didn't do anything; and then, finally, they came for me, and there was no one left to object. So I think that, in terms of responsibilities, it's not just about speaking out for people who are like us, the groups we can identify with. We have a larger responsibility, because if we don't uphold that responsibility, then we're not protecting ourselves; the best way to protect ourselves is to protect others. Anyway, we'll continue with that on Tuesday, and I'll deal with all my promises about the critique of the blog today, so I'll get things online. Thank you for your attention today, thanks for the big crowd in class, and have a good weekend; take care. I have office hours now, 2:30 to 3:30, on Zoom, and if you want to come to my office in person, you can do that as well. Okay, thanks again everyone, take care.
Zoom Chat Transcript
people on zoom?
Will the deadline of the critique assignment still be the same, the same as the web group assessment?
Now they are different: March 25 and April 4. We can talk about that next week.
Can you also explain the critique of a blog entry, like how we choose one and how to write it as well?
Please take a look at Dr. Hepting’s website: http://www2.cs.uregina.ca/~hepting/teaching/CS-280/202210/#ssAsgn_COB
I would recommend emailing Dr. Hepting or coming to his office hours if you have any specific questions
I have seen that, but I'm not sure which one we have to select. A little bit confused
I am actually confused about how to think of the last question
If you put your blog in the public blog, critique the one that is after yours. If you are the last entry, critique the first one. If you didn’t post in the public blog, Dr. Hepting will assign one to you
Critique the blog entry that has a time stamp after yours
When do you hope to post your example critique?
The most important thing that I encountered:
the most important thing that I learned is the biases of developers may still be present in the systems they create.
Having an AI control and enforce policies can be fundamentally flawed due to the biases of the AI's programmer.
The most important thing that I learned in the lecture today is that you can't completely rely on A.I. to replace people, because those A.I. will still have the biases of the people who developed them, but A.I. can be used as a tool to work with people to make jobs easier. ex: an A.I. that processes pathology reports and makes it quick and easy to write them and understand them so that doctors can spend more time doing other work.
An AI is limited by its creators and will thus have the same biases that its human creators have. Although it can be a very helpful tool to simplify, organize, and gather information, in the end it is our responsibility as people to evaluate and critically examine online sources.
We can use artificial intelligence to deal with bias in the field of media and journalism. One site’s artificial intelligence (AI) chooses a story based on what’s popular on the internet right now. Once it picks a topic, it looks at more than a thousand news sources to gather details and writes its own article. However, we argued that an AI is only as truthful as the computing professional who creates it, since he or she might have some inherent cultural bias of their own.
One thing I found fascinating is that some AI programs can learn a topic/idea without the use of human knowledge, hence removing any biases.
Today's discussion was about whether we should rely on artificial intelligence as a tool to support our evaluation and critical examination of online sources. In that regard, I think it is rather our responsibility to verify online sources, as at the end of the day, even the AI follows the algorithm that we create.
AI has improved healthcare, especially in diagnosing patients
AI wouldn't be really able to help us in identifying the Truth
In today's class, the professor told us more about the critique of a blog entry, and that he will assign the blog we'll have to critique if we didn't put our entry on the public blog. There was also a continuation of the discussion that we had last time about Instagram allowing hate comments like death to the invaders and such things. This is sensitive and should be properly managed.
In today's class, we talked in the breakout room about the slippery slope. We also talked about the ethical issues of AI and how to view online posts critically.
The most important thing I learned is that AI can also be biased. I thought AI would not be biased since it is just a machine, but I did not consider the fact that it is also made by humans. Humans have biases, and thus if they control AI, the AI will also have the same biases as its creator. I thought it was interesting how AI can also have bias.
It is hard to know whether we can trust an AI or not, because there has yet to be an AI that is 100% independent from human supervision that people can reliably use.
The issue of trust is still not going to be fully addressed using AI because biases in choosing data sets for the AI to be trained on as well as the code in creating the AI will be reflected. Thus, trust is not being fully addressed. Trust may be more addressed in some cases though compared to humans depending if there is limited amount of biases in creating the AI
Related to the issue of trust by using AI, I believe we cannot hand AI the torch to evaluate truth of info, because the programmers who create the code and algorithm have biases as well when it comes to information online. So, instead of hearing or learning the truth, we are presented the developer's thoughts towards the situation
Was that technology companies can make very impactful choices that affect all users on their platforms.
I learnt that AI isn’t necessarily the primary solution to fixing the disinformation problem, but it can be one of the tools we can use. It can be used to check our own judgement about things, but not intended to be the main tool in which people should make decisions or base their reason. On the other hand, people might become too trusting of machines and use it to think for them, rather than as a factor in decision making.
We as computing professionals should keep a high standard for our communication on online platforms. It is our responsibility to promote truth and to avoid logical fallacies in arguments and adhere to conventions. It is crucial for us as computing professionals to identify and analyze the biases that may be present in the systems we create.
I understood the need for Instagram to be more active (for lack of a better word) at policing content with sentiment for the public good, and how to balance it against an individual's freedom of speech
I learned that Instagram lifting the ban imposed on the hate speech “Death to the invaders”, while being positively viewed in terms of PR, is not entirely ethical. The ACM Code of Ethics guidelines state that well-intended actions that may lead to negative consequences have to be avoided. Posts made on Instagram in that direction have the potential to incite violence against Russians.
In today's meeting, we discussed whether the action is supported by the ACM Code of Ethics, and whose responsibility the erosion of trust in media and public institutions is. We also discussed whether we, as future computing professionals, will have obligations in this regard.
We talked whether or not the hate towards the invaders in Ukraine should be allowed to be spread over various social media platforms. We also talked about sentiments like "Death to the invaders" should be allowed on platforms like Instagram or not, and the impacts of allowing or disallowing them. We also considered ACM rules of ethics while discussing this scenario.
In today's lecture we discussed the use of AI in health care and also discussed the blog critique. We also continued the discussion of whether Instagram should permit the sentiment of death to the invaders. In the class discussion we also talked about high standards for online discussion.
The most important thing I learned today in class is the ACM Code of Ethics and the question of whether Instagram's actions support the public good, then the ideas on artificial intelligence and how it can contribute to health care.
In today's class, we discussed AI: how it is helpful to us, as well as whether we are really using it or not.
It is really interesting to learn the topic we covered in class today; the humane interface has been a topic of discussion in class. It is so good to learn these things and I'm really enjoying this class
Today we talked about the contribution that artificial intelligence can make to health care. Discussed (AlphaZero/AlphaFold) learning without using human knowledge (+ bias)
AI can only be trusted to some extent when it comes to social media or personal information
Instagram does not really support the public good in my opinion. There have been studies done that show Instagram can be detrimental to mental health. On another note, I believe the use of artificial intelligence is needed for sorting through the mass amount of data stored by Instagram. I don't know if I would consider it more effective than us fallible humans, but there is no question it is more efficient.
Today the standard of content on Instagram and other social media sites was discussed. Online sources should follow an AI code of conduct before permitting sensitive uploads. Actions against negative users should also be taken. Related topics were discussed in breakout rooms.
As computing professionals, we have an obligation to keep online forums open to free discussion while also making sure the discussions aren't hate speech.
Trust is from human to human; if we replace humans with AI, then the whole thing becomes meaningless
The most important thing I learned today was the ACM Code of Ethics and Professional Conduct.
In today's class, we discussed the contribution of AI. We also discussed many topics.
The most important thing that I learned: Instagram's actions do not support the public good, but rather support individual good at the expense of people. Also, the programmers of AI should follow the ACM ethics and should not be biased while creating the code that can be used as filters to prevent hate speech or slurs on social media. Also, AI is beneficial, but we cannot rely on it because it can be incorrect sometimes.
AI will be trustworthy if we know how to control it. For now, AI still cannot do much work; it is more important for humans to develop AI while staying aware of its dangers.
AI provides a very good condition assessment technique that can be further combined with current condition-based maintenance decisions, but also the same AI is subject to the limitations of its developers and we need to look at the resources it provides with a critical attitude.
The most difficult thing for me to understand:
about the website assignment
I am currently experiencing COVID symptoms and barely followed up on today's lecture. I experienced symptoms last Tuesday and tested positive yesterday. Before going to your class, I tested negative and knew that things will go downhill from there. Hopefully, you will accept this short response. I did it before resting again.
Whose responsibility it is to police online speech
The thing about which I would most like to know more:
The topic we discussed was related to Instagram and its responsibility regarding what to show to the public. I have come across a lot of content that is available on Instagram and questioned whether it belongs on a platform like Instagram, whose audience includes children as well. I would like to know more about how these things should be managed so that no one is badly impacted by these applications.
How well does current AI work in the fields of fact checking?
I would like to know more about what goes on with the algorithims companys create
I would like to learn more about AI, we were speaking of it in class and someone raised the point that it could be infected with the biases of the programmer. I would like to know if that is completely true, and if a true AI could progress or learn beyond those parameters.
When does AI go too far, and should we reach that point?
Is the ACM Code of Ethics in favour of this activity?
I would like to know more about how AI as a tool can support our evaluation and critical examination of online sources.
In today's class we all talked about artificial intelligence and Instagram, and the discussion was quite exciting; moreover, I would like to know more about submission of the critique.
Today we learned about the possibility to train algorithms without using human knowledge. I want to know more about this process. Such as why it isn't being used more? Are there any downsides to it? Does this really result in it not having human biases or does this simply create different biases from biases in the data?
I would like to know more about how artificial intelligence could be more useful in addressing biased events and hateful speech on social media platforms.
I’d like to know more about AI and the general consensus on the helpfulness of AI without human bias
Can an AI ever portray the decision-making process of a human? There is the fear of AI replacing and fully automating the workforce, but to do so requires understanding human decision making. Some jobs need human bias and judgement in order to function, like doctors understanding patients' emotions or programmers who accept the ambiguity of descriptions. Is it possible to simplify their actions into an AI? Even if they were replaceable, would the objectivity of trust make people choose them over humans?
Today we discussed how the media influences our opinions toward the same thing.
I'd like to know more about laws and ethics regarding content and social media / communications companies responsibilities. I understand that under certain laws, companies are not entirely responsible for user generated content. For this reason, I don't believe social media companies are responsible for what is posted online. I would like to learn more about effective methods of changing people's minds and convincing people to be open minded and critical of their own beliefs.
I would like to know more about AI that learn without using human knowledge. How do these machines go about learning and what sort of subjects are they learning about? How successful are they?
I'd love to learn more about AlphaZero and AlphaFold - I have some background in Machine Learning, so I find it hard to believe that there can be any way to create an AI to judge this without some sort of bias from its creators. For example, with machine learning you need a training and validation set - the former is used to teach the model what to look for, and the second is used to see how effective the training was. Who decides the dataset used? Is this a technique that is not machine learning?
I would definitely like to know more about AI contributing to healthcare and overall society and the future.
The thing I would most like to know more about is whether AI would truly be feasible as a misinformation censor/detector. Making an effective AI without bias that could work would be difficult, but if it could be done, the result is interesting to think about.
I would like to know more about the efficiency of an AI and the probability of the AI not going rogue. Humanity is heading in the direction where we build and create technology that makes things easier for ourselves and the world we live in, and AI is one of those. I am curious how positively and negatively it will affect humanity when we fully utilize AI in our day-to-day lives.
I would like to know more about how AI can help to identify reliable sources
In today's discussion, we talked about whether we can trust artificial intelligence to make decisions for us. AI's ability to not only make predictions but explain why it made them is especially important in healthcare, where a wrong prediction can cost a human life. “If we are to build trust between man and machine, you have to be able to backtrack AI's suggestions and question how it got to them.”
I would like to know more about Machine learning. AI that learn without using human knowledge and how are they different from AI that learn using human knowledge.
biases in the development of AI and the role of everyone to uphold truth in online platforms.
I would like to know what the current responsibilities are for media corporations and governments in keeping the media truthful and honest. I would also like to know more about other countries' governments and media corporations and how they manage their responsibilities to be truthful and honest in the media.
The thing which I would like to know more about is the question of AI help: can the issue of trust be addressed by AI? In my opinion, complex issues should be reviewed by humans, not by AI; for example, creating awareness by activists, and journalists covering war information and controversial comments that can radicalize people.
I would like to know more about whether we could rely on AI as a tool to support our evaluation, and if so, how.
How artificial intelligence can be used in areas of society that we wouldn't normally think that it would have a role to play? For example, can AI be used to improve education outcomes in our schooling system? What would that look like? I feel there is a ton of untapped potential here while also a very strong need to be cautious.
The topic of artificial intelligence is a fascinating and interesting area to discuss, which I would love to keep learning about.