Students think the College Board is running a Reddit sting (vulture.com)
225 points by hhs on May 17, 2020 | hide | past | favorite | 184 comments


I took the AP Chem exam last Thursday, fortunately without incident. One of my friends was unable to submit his AP Literature assessment, facing the same error popup that has plagued so many other students. I've heard from several other students in my district who have had a similar problem. My brother got an AWS Lambda error message when he tried to access his exam the first time; CollegeBoard had surpassed their Lambda service limits (fortunately, he was able to access his exam after trying again).

Their online tooling has never been great. AP Classroom takes at least three tries for me to access: one that makes me log in again and then redirects back to AP Central, one that just redirects back to AP Central, and finally one that takes me to my assignments page.

This certainly isn't an easy problem to solve. CollegeBoard definitely should have designed a more stable system to host their exams on, but as with anything digital, there's always an opportunity for error–even more so since everyone takes the exam at the same time (which is an entirely different can of worms, with students in Guam taking exams at 1 a.m., 3 a.m., and 5 a.m.).

If I were to design these assessments, I'd make them untimed, synthesis-based assessments, more similar to AP Computer Science Principles[0] with its Digital Portfolio: simply set a deadline for all students to upload to a basic tool (which already exists as CB's Digital Portfolio system). Like the current exam, have the submissions sent back to teachers for validation (nothing in the current system validates your identity: simply paste in your AP ID, enter your name/dob, and you're in the system).

I think if a student can write a well-formed essay on a topic, they can get college credit. In science classes, they could design an experiment and simulate a lab report. CS classes can create a program, etc.

[0]: https://apcentral.collegeboard.org/courses/ap-computer-scien...


I have yet to hear anyone propose a workable solution for the ultimate problem of remote assessment in CS: hiring someone to just do your work for you. Everyone keeps saying "oh just make it project-based" like I didn't have a student last fall who hired someone to do his final project. I only detected him because the person he hired sucked and left breadcrumbs; how many students did I not detect because they hired competent people?


Some companies, like Pinterest, use systems like "lytmus" to prevent cheating. It's a monitored VM box where you do the entire assignment inside the GUI VM. I'm sure certain trust factor guarantees prevent you from outsourcing the assignment overseas.

All that said, it's an awful system and I ragequit the take-home midway through. It's buggy, slow, lacks useful hotkeys, and basically requires you to become accustomed to a wiped Linux box without any of the tools you would typically use. Sure, you can install them - but it's a timed assessment, for crying out loud.

Finally, I suppose nothing stops you from logging in with your credentials on your authorized machine and then physically handing that machine to a paid agent.


This is the same reason I offer candidates remote, screen-share-based technical assessments. Part of what makes an effective engineer is mastery over tools: by putting you into CoderPad- or HackerRank-type tools, I'm handicapping you.

Not everyone opts for that (maybe their local environment is messy, or they only have a locked down work device, etc), but I always make it an option.

I've learned many things over the years watching other experts in their local environments as an interviewer. New packages, new shortcuts, new helper apps, ...


Ha.

I'm a Spacemacs user and I had to do the remote interview for my current gig in Google Docs. That was a bit frustrating.


> I'm sure certain trust factor guarantees prevent you from outsourcing the assignment overseas.

Who said anything about overseas? Hire your classmate to sit down at your own PC to do it. That's the Hard Problem.


I heard some companies, possibly only in India, required a webcam to be on the whole time and for you to rotate it around the room beforehand.


Wilfrid Laurier University in Canada is trying to make the students do this for exams this year.


Yeah and this stops me pre-recording it how? I have a USB capture card that appears to the host machine just as a UVC webcam.


When I took my RedHat RHCSA exam, the proctor was a live human, and I needed to move the webcam in time with their instructions. It was a basic around-the-room pass, but it covered anywhere that I could have stashed materials with which to cheat. From there, even if I did have external materials, it would have been pretty obvious that I was using them because my eyes would be focused offscreen for an extended period of time.

Just the knowledge that another human being was proctoring the exam made the idea of attempting to cheat pretty far fetched. My body language would have given it away, and it wouldn't be easy to transition smoothly from the interactive setup to some kind of pre-recorded loop without it being easily detected. The kind of person who could successfully pull that off... well, could also probably pass the test on their own merits.


I suspect people are making this more complex than it needs to be. For most of these tests, the difference between a good score and a poor one is just a few questions.

So, the first step is just to actually take the test. The cheating bit would be having someone monitor what you're doing via a reasonably hi-res hidden camera pointed at your screen. Then all you need is a channel for that person to communicate with you: a tiny Bluetooth hearing aid in one ear, a phone/screen taped out of sight on your leg, or something attached to your existing screen. Basically, somewhere not visible to any cameras monitoring you.

So, when the person monitoring you detects that you have answered something incorrectly, they send you something to the effect of "question #5's correct answer is Y because A, B, C". Then later you get a moment of insight and go back and fix the problem. In theory, unless you're completely clueless, the difference between a 2 and a 4 is just a couple of questions, so this would only have to happen a couple of times during the whole test. Again, unless you're completely clueless, if the proctor notices you fixing an incorrect answer and asks "why did you just change your answer," you give them something reasonable-sounding. Heck, for some of them it might not even have to be the right answer; it's completely possible to get the right answer the wrong way in a multiple-choice section. AKA, "I guessed on that one the first time, and A & B seemed far fetched so I just picked C over D, but then later I questioned my assumption and decided it was B & C that seemed far fetched, so I picked the remaining answer, which was D rather than A."

Anyway, I'm not sure the screen is needed. For a test that is 100% multiple choice, a buzzer in your shoe would be enough if it went off whenever you selected a wrong answer, and then pulsed a code to indicate the right answer.


Did they ask for a photo of your laptop? Particularly behind the lid? That’s the one place your webcam can’t see so it seems like it would be the ideal spot to hide or mount a cheat sheet.


Ah, no but the exam was taken at a kiosk, not on my personal machine. I had only a few minutes in the room to get myself situated. The camera was USB, so I moved that around while the laptop remained stationary.


Could you have hidden a cable to another monitor/keyboard in another room?


Ask a few questions, including at least one that's unpredictable.


"Touch your left cheek. Now show me the ceiling. Smile at the camera. Blink three times quickly."

If you can pre-record stuff like that without knowing what the sequence of requested events will be, I'm impressed.


Oooh, it would be cool to deepfake oneself on this. On an intentionally downsampled video feed it could be quite convincing.


While I would agree that the environment is torture to use as a general-purpose screening tool, I’m sure there are jobs where the ability to get work done in a base Linux install is an asset :)


Wow, when did Pinterest implement that? I don't remember anything like that when I interviewed with them, but it was a couple years ago.


2017 through early 2018. I think (and hope) that they've changed the process.


Ah, okay, it was late 2018 when I interviewed there. I guess they got rid of it since.


When your students submit the project, have the landing page give them 10-15 minutes to make one tiny change and resubmit. Pick something that would be trivial to do in seconds for a student who did his own work, but would cause a lot of stress for someone who outsourced.

Save the 1:1 interviews for students whose original program was great but whose modification went down in flames and you'll likely be talking to your outsourcers, plus identifying students who maybe could use some extra help even if they did everything themselves.
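The "pick something trivial" step could even be automated. A minimal sketch in Python, assuming submissions are Python source; the helper name and the choice of string literals as modification targets are my illustration, not part of the original suggestion:

```python
import ast
import random

def pick_modification_target(source: str):
    """Pick a random string literal from a submitted Python program.

    Hypothetical helper for the 'one tiny change' check described above:
    the student would be asked to change this exact literal and resubmit.
    Returns None if the program contains no string literals.
    """
    tree = ast.parse(source)
    literals = [node.value for node in ast.walk(tree)
                if isinstance(node, ast.Constant) and isinstance(node.value, str)]
    if not literals:
        return None
    return random.choice(literals)
```

A student who wrote the program finds the literal in seconds; an outsourcer has to orient themselves in unfamiliar code under the clock.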


Are you prepared to fail a student, and/or expel them from the university, on the basis of their not successfully making the change? Sure, you can invite them in for an interview, but what happens if the student just denies any accusations?

I am also skeptical you could find appropriate trivial changes to request. Students, and particularly beginner students, may write very unconventional code that may not be as easily modified as a well-thought-out solution.


If it's something like modifying a string literal, then I can't see the problem. If the program is too obtuse to permit that, then that's a major concern on its own.


If it's just modifying a string literal I suspect most would just search for that string and have little trouble even if the work is outsourced. Perhaps within a certain set of requirements there are modifications you could ask for that are not trivial and could be done in fifteen minutes but I think this is pretty hard in the general case and depends a lot on code quality.


I guess you could take those students and review their cases personally by talking to them.


This will work for one semester. The next semester everyone will know about it, and they'll keep their outsourcers on staff until after grades are given.


> I still have yet to hear anyone propose a workable solution for the ultimate problem of remote assessment in CS

What about 1-on-1 interviews? Ask people the questions face-to-face, ask them to talk you through their answers. That's how advanced degrees like PhDs are assessed.


This is also what my team does for professional interviews. We give them a take-home test with some simple problems, but we actually care much more about how they talk through their solution with us than the solution itself. We don't even care if they copy/pasted stuff from Stack Overflow.

What we care about is that they can demonstrate they understand the problem and solution. The code is just the starting point for that conversation.

We have hired people who submitted failing solutions because they were able to think through the problem on the spot when we told them it failed and asked why.


I'm 1-on-1 interviewing my 59 Algorithms students this semester. 20 minute interviews, and I have 2.5 TAs to help offset the workload. Starts on Wednesday, I'm absolutely dreading it. This would never scale to my 180 Intro students that I'll be getting this fall.


Recruit proctors from big companies.


What are the 59 algorithms?


59 students of algorithms, not students of 59 algorithms, I would guess.


I would definitely take a course entitled "59 Algorithms". What a fantastic idea for a second course in algorithms! Looking at 59 of the most important algorithms (in the opinion of the professor). It's even more intriguing since it's not a round number, so you know the prof didn't add a bunch of filler just to get a nice number.


"59 Algorithms" would be a fantastic book title. I'd buy it.


Yes, thank you for the clarification :)


In my experience, this is definitely the best way to evaluate how competent someone is in an area. If they can easily hold a casual conversation and express / defend some sensible opinions about an area, it's likely they know what they're talking about. Obviously, you should ask technical questions in the interview, but it's the little things in the conversation that say a lot. Unfortunately, this approach doesn't scale to the size of the AP Exams.

Dijkstra exclusively gave oral final exams on a board. Allegedly, he stopped one of his former students after the first question and told him something to the effect of "you clearly know what you are doing by how you answered that, so you'll get an A, but your handwriting is atrocious." He spent the rest of the exam time making the student practice penmanship.


Dijkstra - not a professor I would want to emulate.

I'm unconvinced that oral exams provide the most secure and consistent form of assessment. I'm not even sure how to evaluate such a claim, though, so it may have to stand as an opinion.


There are entertainers streaming with appearance-modifier and voice-modifier filters. You can't know you are interviewing the person you think you are.

I don't think there is an off-the-shelf product for cheaters, but it's only a question of time. Especially now that I mentioned it.


I think ideally you should know your students well enough to be able to judge whether they're giving you answers which correspond to their ability and their own take on the topic. If you've been tutoring them up to this point in the year, you should know them pretty well.


I don't know if you have been to college recently, but I had a handful of intro classes with 200+ students. I never spoke to those professors once.


Those aren't classes. They are class theater.

They probably fail even on simple arithmetic: the time required to make a minimally good-faith effort at applying a grading rubric is greater than the time a teaching assistant is paid to "assist" in the grading.

Your teaching assistants almost certainly scaled back their time grading your assignments to fit their allotted hours. If any of those intro classes fulfilled a writing credit, you can probably start with the published undergrad writing-credit requirements, figure out how many hours your TA got paid, and look at the number of students per TA to figure out just how much money you and/or your parents lost on the deal.


> figure out just how much money you and/or your parents lost on the deal.

If you consider college tuition to be the money you pay to be educated, you're definitely right. Personally I think of college as something like a licensing or certification.

I learned twice as much at my part-time programming job as I did in class (but classes were instrumental in getting me through technical interviews). But once I had my degree, I instantly had multiple 100k+/yr offers in a moderate-COL area. For me, that is a good deal (but still a huge hassle).


> If you consider college tuition to be the money you pay to be educated, you're definitely right.

If I think of class theater in an education that led me to 100k+/yr, that is a good deal.

If I think of class theater in a certification that led me to 100k+/yr, that is a good deal.

That it is a good deal doesn't change the fact that class theater is a shady practice.

It's also detrimental. But since we don't have the tech to do speed runs against the versions of ourselves who took non-shady intro classes it can be easy to shrug.


Sadly, certification is more or less our job now in higher education. Wasn't how the system was designed, and isn't what most faculty think about. But it's really becoming accurate.

And I can't do that without proper assessments!


Yes, you would hope so in smaller, more specialised classes. However, in a ~200+ person class, you just won’t have that kind of relationship with each student. Additionally, this completely breaks the anonymous marking that many institutions strive for.


1on1's take too long and don't InternetScale^tm


This is how all exams at the university level used to be conducted. Not really practical however when you have large class sizes.


It's definitely a problem, even on CollegeBoard's current platform. Normal AP exams/SATs require photo ID validation at the exam site, and are run by school admins who will notice if someone is out of place. I work as a tutor on Wyzant and have been asked to "tutor" during the exam as well as to just do CS Principles students' portfolio assignments.

While some might argue my approach is too aggressive, I report every account that messages me asking to cheat to Wyzant as well as the appropriate organization. My goal is to make it simply not worth the attempt to cheat (obviously only if they explicitly ask; I don't report when it's a possible miscommunication). Just about all of them have had their accounts deactivated.

If technology access could be assumed, I don't think it would be unreasonable to ask students to upload a picture of themselves with their photo ID, as well as some sort of validity check, like a unique code only provided to the device they're taking the exam on. Obviously, that can't be implemented because of equity issues. Ideally, a secured assessment would be done on something similar to Pearson Vue, ProctorU, or the multitude of other online proctoring services. Unfortunately, that's not an option due to both scale and technology access (not everyone has a compatible device, I'm sure many students are taking the exam on their phones).
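The per-device code idea could be as simple as an HMAC over the student and device identifiers. A hypothetical sketch; the function name, the identifier format, and the 8-character truncation are my assumptions, not CollegeBoard's design:

```python
import hashlib
import hmac

def device_code(secret: bytes, student_id: str, device_id: str) -> str:
    """Derive a short per-device exam code, as the comment proposes.

    The server holds `secret`, binds the code to both the student and
    the registered device, and the exam page would refuse to load
    without a matching code. Truncating the hex digest keeps the code
    short enough to type by hand.
    """
    msg = f"{student_id}:{device_id}".encode()
    digest = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    return digest[:8].upper()
```

The same secret always reproduces the same code for a given student/device pair, so the server can verify without storing anything per device.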


You could make them give a video presentation about their project to show that they actually know how it works, not foolproof but it helps. I think it is pretty much impossible to completely remove that possibility though. I've had lots of courses where the majority of my mark is assignments and projects, and I'm sure some students pay others to do it for them, and the professors just accept that as a possibility. I don't think it is worth providing a worse experience to the majority of honest students to prevent a small minority of dishonest students from cheating.


Do you have them upload the source code of their projects? If so, after the upload could you display the project’s file tree along with a last minute change their hypothetical client has requested and give them a minute or two to select the file(s) that would need a pull request in order to make the change? They don’t even need to make the change, just identify where the change would happen.

Someone who wrote the code themselves will know right away and someone who purchased a project won’t have enough time to sift through the code to figure out the answer.


Well, it's not an easy suggestion, but certainly an interesting one. I'm going to think more about this one. Thanks for providing a new thought on a hard problem!


The problem reminds me of that old story about the group of kids who missed their final exam with a flat tire. If you can figure out the modern CS equivalent(s) of asking your students “which tire” about their project there’s probably an easier implementation. Good luck!


An interesting way to assess your students, if you have time and a reasonable amount of students, is to do what the IB organisation required my Spanish teacher to do: a one-on-one oral exam with presentation and follow-up questions from the teacher, all recorded and sent to the IBO grading board.


I'm 1-on-1 interviewing my 59 Algorithms students this semester. 20 minute interviews, and I have 2.5 TAs to help offset the workload. Starts on Wednesday, I'm absolutely dreading it. This would never scale to my 180 Intro students that I'll be getting this fall.


But doesn't the whole CS 101 class really not scale in general? I remember having code submissions run through some anti-plagiarism system back in the '90s.


I'd like to think the curriculum I'd been developing for the past few years was scaling pretty well. Most of the technology is reaching enough stability that I can give a ton of automated feedback without killing myself, I had a pretty low DFW rate, most students could complete my final programming problems in a proctored exam situation, and evaluations came back high. I wasn't happy with my project rubric scores, but I think I was making progress there, and it was partially just expectations set too high (I just want students to test and decompose more). I'm interested in how we can scale introductory computing experiences, and I think I was on a good path before they yanked my proctored exams out from under me :)


The only way is in-person student code review or in-class projects. You could go over their code with each student and ask pointed questions that can only be answered if the student wrote the code themselves. But I'm not sure how feasible that, or the in-class project, is for most teachers.

Also, anyone remember the story of the programmer who subcontracted his work to some offshore freelancer for a fraction of his salary? He just sat at his cubicle goofing off all day while the freelancer did all the work.


Could something like pair programming work to reduce the risk of hiring out the whole project?


Reduces the risk, but doesn't prevent it.


I agree to a certain extent on your revised assessment framework, but I’m not sure it would work very well in some of the basic sciences. As someone who has gone to college for these things and beyond, I can tell you there’s very little as a “practicum” that I could have demonstrated coming out of high school or early college. This is in contrast to CS, where if you build a program, the fact that the program exists and solves a given problem correctly is ipso facto a demonstration of understanding. Further, it’s a demonstration of fairly comprehensive understanding of the coursework (you’d have to grasp >80% to have the program work at all), which I have trouble seeing how to do that in say, chemistry.

You propose a lab report, with example data, and I actually really like that idea, but I think it would fail on that last score. Looking back, AP chem is a lot of inorganic reactions (lots of memorization, some principles), acid base work, some organic chem, some lab techniques. It’s fairly scattered. A single (or several) lab reports would not capture all of it, especially given that chemistry doesn’t work perfectly predictably even with a post-graduate understanding. There’s a reason basic science lab courses do boring experiments, it’s because those work reliably.

Anyway, I like the idea, and want to hear more on how to do it, but a lab report would have to be supplemented by something else to really evaluate understanding in the sciences.


AP Chem is definitely an exercise in memorization and general "knowledge of chemistry" for a good portion of the class. Though not a complete solution, a lab report-based assessment could look more similar to the CS Principles exam in another facet: written response.

On the APCSP exam, there are actually very few points for your code itself. You get a point for identifying an abstraction and a point for identifying an algorithm, but the rest of the points (there are 8 total) are assigned based on your written responses. Analytical questions could be added to each exam–broad enough that no two students should have too much overlap, but narrow enough that it's clear what is expected of a student response.

From there, a few points could be assigned for the quality of the designed experiment, but a majority would be assigned based on the quality of the written response questions.

AP Physics follows a similar model on the regular paper exam–you have to prove some theory and design a complete experiment to do so. I believe it also included one light analysis question.


> Further, it’s a demonstration of fairly comprehensive understanding of the coursework (you’d have to grasp >80% to have the program work at all), which I have trouble seeing how to do that in say, chemistry.

Lock someone in a room with some reagents, and expect them to produce extracted/cleaned output products of a certain testable purity and mass after N time. Make the process of getting from the reagents to the final product really involved, requiring at least one of every kind of basic reaction/setup taught in the course. (NileRed's "aspirin to tylenol" sequence might be a good example challenge: https://www.youtube.com/watch?v=b0Ejuew2riA)

You know this is "the practical approach", because this is essentially how gangs qualify their chemists. "If you can really make [illegal drug], then make us some! Without doing anything that'll get the police called! In four hours!"


I had excellent knowledge of Chemistry theory, but I could not for the life of me get the expected results in lab. Like, I know that you should not be able to make a battery out of a lead anode and a lead cathode in distilled water, but I did.

That sort of practicum may be a reasonable test, but could give very different results than the normal written exam.


The entire problem seems to be that the College Board (and ACT and SAT) took an initial design that works fine in a proctored classroom and applied it to a global testing scenario where it doesn't fit.

> If I were to design these assessments, I'd have it be an untimed, synthesis-based assessment...

Yes, but that takes more resources and has more variability among reviewers. It does have the advantages of being more fair for students with time zone issues or closures, and of being durable in the face of temporary website issues. Judging by their decisions, the priorities of the testing agencies seem to have been taking out the cost of the human element of reviewers (multiple choice is easy to grade, synthesis essays need expert reviewers) and the variability of the tests and reviewers to certify their results for schools.


Normal AP tests are half multiple choice and half "free response." What "free response" means depends heavily on the test, but they are always human-graded. This year, the tests are free-response only.


Human-graded is a generous way of describing what they do with free response. When I took AP tests, a large part of the preparation was knowing exactly how they were going to grade the questions, and optimising answers to hit as many points on the answer key as possible.


Yeah, "free-response" is only true if you're unprepared.

A significant part of taking AP English when I went through was the teacher saying: "Okay, here is the question. Now, what are the likely literature references the AP reader expects you to use? Okay, what are the likely scoring points the AP reader has on their worksheet? Okay, now write to those. Writing anything else is a waste of time."

You could count on one hand the number of students from her class who didn't score a 5.


Wow, none of my AP teachers back in '97 offered anything close to this advice. I wonder if that was normal for the time, or if I just experienced the low end of the variance of preparation.


Possibly we could be considered the high-end of preparation.

The English teacher in question was an actual AP Reader as well as an assistant editor of the local newspaper.


> My brother had an AWS Lambda error message when he tried to access his exam the first time; CollegeBoard had surpassed their Lambda service limits (Fortunately he was able to access his exam after trying again).

This is absolutely hilarious. Did they not realize the concurrency limitations?
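For context, Lambda returns a throttling error (HTTP 429, TooManyRequestsException) once an account's concurrent-execution limit is hit, and the standard client-side mitigation is retry with exponential backoff and jitter. A toy sketch of that pattern; the exception class and the flaky service here are simulations standing in for the real AWS call, not actual AWS APIs:

```python
import random
import time

class ThrottledError(Exception):
    """Stands in for Lambda's TooManyRequestsException (HTTP 429)."""

def call_with_backoff(fn, max_attempts=5, base_delay=0.5):
    """Retry fn with exponential backoff plus jitter when throttled.

    Doubles the delay each attempt and randomizes it so that thousands
    of throttled clients don't all retry in the same instant.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except ThrottledError:
            if attempt == max_attempts - 1:
                raise  # out of attempts, surface the error
            delay = base_delay * (2 ** attempt) * (0.5 + random.random() / 2)
            time.sleep(delay)

# Simulated service that is over its concurrency limit twice, then recovers.
calls = {"n": 0}
def flaky_submit():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ThrottledError()
    return "submitted"
```

Of course, backoff only smears the load out in time; for a fixed exam start it mostly shifts the question to how much headroom was provisioned in the first place.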


I wouldn't be shocked if College Board just signed up AWS ProServe to a big contract.

If not, details in profile, College Board!


On a related note, I've seen multiple posts online about technical difficulties with the online AP tests, like videos of students pressing the submit button as the countdown timer runs to zero, with nothing happening. Some claimed they contacted College Board and were told they'd have to troubleshoot their computers during the makeup exam. That seems unreasonable for such an expensive test.


They should have Amazon run the exams. They are the one tech company committed to their services actually working when needed.


They outsource their certification exams to Pearson and PSI. Both platforms are riddled with bugs.


Oh, the education giants are horrible.


Nearly every single one. In college, I probably used close to a dozen different web applications for assignments. Math, chemistry, engineering, biology, Spanish.

I can't remember the name of the web app for my calculus homework, but it was the only one I kind of liked, and ironically it was free. I think it might have been WeBWorK. And in a CS class we used Perusall, which I liked as well, but the grading was an awful black box.

The others cost several hundred dollars, sometimes per semester. I remember the immense frustration of my Spanish professor, who sent an email to their support at least once a week. Their answer was always "We only support Firefox"

That did not stop dozens of students from using Chrome and Safari, myself included. I remember once getting frustrated with the homework, so I wrote a little JavaScript to brute force a question. The security was generally awful.
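That brute-force stunt works whenever answer checking happens client-side. A toy Python reconstruction of the idea; the `check` callable stands in for whatever validation the page's JavaScript exposed:

```python
import string
from itertools import product

def brute_force_answer(check, max_len=3):
    """Enumerate short lowercase strings until check() accepts one.

    Toy version of the trick described above: if the homework app
    validates answers in the browser, nothing stops a script from
    trying candidates until the check passes. Returns None if no
    candidate up to max_len characters succeeds.
    """
    for length in range(1, max_len + 1):
        for candidate in product(string.ascii_lowercase, repeat=length):
            guess = "".join(candidate)
            if check(guess):
                return guess
    return None
```

The fix is equally old: validate on the server, and treat anything computed in the browser as untrusted.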

My fondest memory was using Github to submit assignments, and later as a TA to grade these assignments. Much less friction.

Obviously, I think college professors are looking for a way to reduce their burden of creating assignments and grading, but these education giants put out poor quality apps, knowing they'll generate revenue regardless.

Frankly, I think there needs to be a widespread effort to open source education. Both content and assignments. There is no justification to charge tens of thousands in tuition and top this off with a few more thousand in jank software.


They exist and thrive on what I'll call "frat bro networking." They're able to become gatekeepers to so many things by using their connections to pull strings. Once they establish themselves in a space, they use standard bullying tactics to stay the gatekeeper. That's how textbooks routinely become $1,000 paperweights each year. That's how brands like Varsity become synonymous with a sport's skill bracket. I've personally lost fights with the giants and know a textbook startup founder that was forced to do a 180 pivot by Pearson.


> how brands like Varsity become synonymous with a sport's skill bracket

Is it bad I did not know that was a company's name?


I'm confused, since it's a somewhat old word: https://www.etymonline.com/word/varsity#etymonline_v_4652


I thought it was a corruption of “university”?


> Their answer was always "We only support Firefox"

These days, that's better than "only support Chrome", but I get your point --- being browser-agnostic seems to be something a lot of sites, not just learning-related ones, are unfortunately not doing lately.


Depends how old it is. Some only support Internet Explorer.


Open source is never going to be good enough for non-technicals in institutions. They want the support contracts, nice installers, and good UIs. They want an answered RFP which proves that it meets some 200 requirements. They want someone they can just pass a support issue off to.


Open source is not the same as DIY. You can definitely combine open-source content with third-party support, though lots of OERs (including e.g. Khan Academy) are sadly licensed under "non-commercial use only" terms that make this artificially difficult, and they do not even provide commercial support themselves.


You can, but the open-source types are usually quite fanatical about you not doing so.


mymathlab is the antithesis of good UI


Yep, Pearson makes MySpanishLab as well. Generally one of the most awful UX I've ever had in a web app.


You have to compare it to GIMP.


As a college student, don’t even get me started on Pearson/Cengage. Terrible UX, horrible glitches, and the “access codes” which only exist to shake down students for money and undermine the used textbook market.


Access codes that cannot be purchased independent of a textbook. The fact that these companies structure their "products" that way bothers me less than the fact that universities allow their students to be exploited by it. Learning that they don't actually care about my personal or financial wellbeing really disillusioned me to the whole college education system.


Using a Pearson-controlled compu-text is a very simple sign that the professor or administration (whoever has decision making power) is terrible and not worth attending.


That’s all well and good until you realize that the class is a graduation requirement and it’s not going to improve anytime soon.


Complain to your university. We outsource grading the weekly portion of homework to Pearson because there is no money for TAs. The alternative would be a full-time position to administer a LON-CAPA instance for the institution, but in the current financial climate that's an impossibility.


That was you just a couple of years ago...


They actually did; many of the errors students have been experiencing are with Amazon’s Lambda product.


They mean "Amazon should run" the exams, not "they should have built their exam submission infrastructure using AWS."

Lambda is part of Amazon's Cloud Computing subsidiary Amazon Web Services (AWS). AWS is a utility like DigitalOcean or any hosting provider. It's not some kind of guaranteed success strategy.


This made me smile. AWS is now the new Microsoft in the sense that "nobody got fired for picking AWS" or like the gold plated cables at Big Box Store to the less informed. MoAr BeTtAR.


Amazon Lambda is basically part of that, though. I mean contracting Amazon to build the system and operate it. Wasn't the Lambda error a resource-limit one?
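If the error really was a concurrency throttle (AWS returns a 429 `TooManyRequestsException` in that case), transient spikes can at least be masked on the client side with retry and exponential backoff. A minimal sketch, assuming nothing about CollegeBoard's actual stack (the exception class here is a stand-in):

```python
import random
import time


class ThrottlingError(Exception):
    """Stand-in for AWS's 429 TooManyRequestsException."""


def submit_with_backoff(submit, max_attempts=5, base_delay=0.5):
    """Retry a throttled submission with exponential backoff plus jitter.

    `submit` is any callable that raises ThrottlingError while the
    service is over its concurrency limit.
    """
    for attempt in range(max_attempts):
        try:
            return submit()
        except ThrottlingError:
            if attempt == max_attempts - 1:
                raise  # out of attempts, surface the error
            # Sleep base, 2*base, 4*base, ... with jitter so retries
            # from many clients don't re-synchronize into a new spike.
            time.sleep(base_delay * (2 ** attempt)
                       + random.uniform(0, base_delay))
```

This doesn't fix an under-provisioned backend, of course; it only smooths out short bursts, which is exactly the "it worked when he tried again" behavior described above.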


Funnily enough, I think that they are running on AWS Lambda.


The college board was in a tough spot this year. They couldn't really cancel the exams -- Juniors need them for college admissions, and Seniors need them for college credit.

But they also have to instill trust in the system.

Which led to them claiming on their website that the test was "uncheatable" and had "the highest level of integrity," while on other parts of their website claiming they had deployed extreme security measures to thwart cheating.

At the end of the day I suspect the colleges will just accept the scores under the assumption that it would be poor form on their part to give the students a hard time about something they had no control over.


> Juniors need them for college admissions, and Seniors need them for college credit.

A position which the College Board has fought tooth and nail to be in for the last few decades!


College Board was a primary scourge of my education. How a single private company could get its dirty fingers into public education beats me to this day.

And please, don't come at me with "they're a non-profit", last I looked in 2014 they had made $750 million, and I'm sure they pay their top executives _very well_ to say the least.


I'm sorry this is just not true. They weren't in a tough spot at all. People who have to go and work in a warehouse for minimum wage with no PPE are in a tough spot.

>But they also have to instill trust in the system.

Yes, and I have to rob a bank to demonstrate I'm rich.

What you're saying is that they're in a tough spot because they can profit from lying and chose to do so.


This is extremely combative and any message is lost through the tone.

The first is a logical fallacy and the second is conspiratorial. What was the alternative, not offer exams?


If their exams can't be trusted they shouldn't be trying to instill trust in their exams. What you are suggesting is literally that they should commit fraud to hide their incompetence. The fact is the system they've created can't be trusted. So no, it's not an imperative for them to make people trust the system, it's the opposite.

We all know full well that they aren't in a position to run the exams correctly, and yet rather than accepting that and working on ways to move forward without the exams, we've decided a consensual hallucination that the exams were fine is the best option. Oh, and screw the kids who couldn't submit the exams. It's more important that the college admissions board has an easy life. And as if that's not crazy enough, the examiners have decided to try and entrap the students. If you don't see how insane that is - that more time is being spent trying to entrap students than to be honest about the veracity of the test results - you've really fallen off the deep end.


Are colleges even going to accept high scores on these exams for credit / advanced placement?

It's a shame the college board decided to go through all this instead of refunding the fees they collected from students, especially when the value proposition on the students' side is uncertain.

Add in behavior like "posting content to confuse and deter those who attempt to cheat" when internet research during the exam is not actually a violation of their rules, and the organization starts to look fairly predatory.


Do universities have a choice?

My local school system is guaranteeing students that their grade will not be lower than when they closed schools. Many classes had not yet had any kind of exam, so they have 95% averages.

Universities obviously believe that any grades from the semester are now worthless and some have said so. But what are they going to do? Not accept any local students?

Every metric you might use to admit students has been tainted by this COVID business.


I think universities are necessarily going to rely more on the "soft" aspects of applications to select students. It won't be a problem for top colleges - they could probably toss out the entire admitted class twice over and still select a world-class pool of students from the application pool.

I think it might be problematic for T2-T3 schools and state schools that are generally lenient when accepting AP credit. As for graduate and professional schools... that's going to be a mess. Harvard Medical School will only consider COVID Pass/Fail grades if colleges make those grading schemes mandatory for students; otherwise, students will have to submit letter grades. That's (1) completely heartless and (2) an arbitrary advantage for students who went to schools where P/F is mandatory.


Presumably a lot of admittance decisions have already been made. These are about AP exams--basically placing out of college courses--so not a huge deal overall (especially at more selective schools).

Honestly, the bigger issue for a lot of students at the moment is the uncertainty over the degree to which schools will open up for physical classes in the fall. What makes sense under those circumstances? And, of course, a lot of the usual gap year options aren't going to be available either.


The issue is for applications next year when these courses should have been at the centre of applications.


Those kids could retake normal proctored tests in November, assuming the world stops burning and CB sets up testing.


Those "soft" sides of the applications are even easier to game than cheating on a home test. Virtually NONE of it is ever verified.

...and I wouldn't worry about the top colleges. Having worked in admissions at one of them - a majority of the students are there due to family connections.


I think we are going to see that it means nothing.

Andy Grove got his education at City College when it was open admission.

Missing a semester of grades isn’t going to ruin SUNY Stony Brook or whatever.


Andy Grove got his degree in 1960. City College went open admission in 1969. City College was tuition free, but fairly competitive.

(12 Nobel Laureates graduated from City College before it went open admission, none since.)


> Do universities have a choice?

Anecdotal, but I do remember being told that every state school in Texas is required to offer credit for every AP exam score >= 3. Doesn't have to be useful credit, mind you, but it has to be credit.


I believe the College Board surveyed universities on whether they’d take these virtual exams for credit. Most institutions said yes.

edit: really? respond instead of downvoting.


Cambridge International Examinations (UK based high school board for O/A levels) is giving students grades on their O/A levels based on the grades they got in their last few internal school exams [1].

Universities will pretty much have to accept them if they want a reasonable intake of students.

[1] https://www.cambridgeinternational.org/covid/


In the US, AP tests aren't really used for admissions—that primarily falls on the SAT/ACT, GPA and the more subjective things (essay, rec letters, etc). AP tests mainly are just for the college credit.


Some schools don't just accept the scores as-is. After getting a 5 on AP Chem, I had to pass another entrance exam and take a freshman organic chem class to get the AP credit


I think the most likely scenario will be either additional testing administered by the university like you experienced or generic credits towards a degree with a requirement that students still take Chem 101 or whatever.


At what point do we stop wasting time and energy trying to build better tools to catch cheaters and start focusing on building a better system of education and assessment?

If a kid can cheat, pass, get a degree and go out and not fail in the real world, what does that say about the validity of the tests? Even further, what does that say about the world we live in?


> If a kid can cheat, pass, get a degree and go out and not fail in the real world

I've known many cheaters (they tend to brag about it). The ones that do good work don't need to cheat, and the cheaters don't stop cheating once they graduate.

Who'd you rather work for / hire / trust with your money:

1. a cheater

2. not a cheater

?


In some countries it's pretty normal and expected for students to cheat in school, but after they find a job where they are treated with trust and respect and given real-world problems to solve, absolute majority will stop playing tricks.


If they have to cheat to solve the toy academic problems, how are they going to solve real problems?


It is simple - college education there is viewed (and I am not sure it is undeserved) as just part of the process to get papers to be employable. I do not know first-hand how useful knowledge received at US colleges is, but I certainly believe it is more of a theatrical show in some other countries with elements of burning in a "good citizen firmware" into students' heads.


I should've given more context. Based on my anecdata in the post-Soviet world, ~50% of graduates choose a job completely unrelated to their degree (e.g. a chemist becomes a web designer, a linguist becomes a QA engineer, and an economist becomes a barista). They probably realize this during the first 1-2 years in college, but they carry on because it's just easier this way, they've made friends with classmates, and education is free or cheap.

The point is, they don't care at all about the academic problems because it's (a) not interesting to them and (b) usually professors don't motivate students with real-life examples. E.g. while microeconomics is very much relevant for everyone, the way it was taught to us, it felt insanely boring and inapplicable to real problems.


They become managers and hire other people to solve the problem.


Well, toy academic problems often have no relevance to real problems.


Yeah, they're a lot simpler.


Model planes are a lot simpler than real ones, often to the point where they are completely different to work with.


Model airplanes have Center of Gravity, Center of Pressure, stability, lift, drag, weight and thrust - just like real ones.

If you have to cheat to design model airplanes, you have no business designing real ones.


> In some countries

This is very vague. I question the validity of this argument.


I can only vouch for post-Soviet states, but it's supposedly also widespread in India and China (and who knows where else). I think we inherited it from Soviet era when hard work and imitation of hard work would be compensated the same.


Even if the first part were true - this part may not be...

> ...absolute majority will stop playing tricks.


Looks like they're not preparing for it well, but I'm sure there's going to be more cheating than usual this time around. I bet there will be some oddities in score distribution and students with unimpressive SAT score who suddenly nailed AP Calculus.


AP Calculus would probably be one of the least productive tests to cheat on. So you place out of the first semester of Calculus at college in spite of not really knowing the subject. Now it's the second semester calculus class. How's that going to go? To say nothing of other STEM classes that depend on math.


You’re thinking like an adult, not a teenager.


Plenty of majors only require one semester of calculus.


At my alma mater passing AP Calc is enough to test you out of your math requirements entirely for Architecture, for example.


CS major at a state school here.

I got to skip 2 semesters of calculus based on my 5 in AP Calc BC.

But yeah I’m kind of ticked off that these virtualized exams (which are trivial to cheat on) are being counted for college credit.


Depends if you need it later. Plenty of people (such as those in business school) might have to take one semester of calc. This way, you get it out of the way completely.


If your school was like mine, they only required 2 or 3 semesters of math or science courses for people in arts programs.

Skip calculus 1, then you can take statistics or discrete math and then maybe biology to round out your requirements.

So long as you didn’t take physics, you wouldn’t hit anything that required calculus in science.


> How's that going to go?

Ya keep on cheating, and explain why you're justified in cheating as college doesn't mean anything anyway.


People who cheat on the AP Calculus test just know no limit. They’ll just put their derivative answers on the test. Then in Calc II, cheating remains their integral strategy — they’re all-in, not just by parts.


I think Calculus is actually one of the tests most robust to googling. You either understand the material well enough to apply it, perhaps with a quick reminder of e.g. a particular trig substitution, or you don't. Using my previous example, you're not going to learn either to recognize that a trig substitution is appropriate or how to apply one in the 3 minutes or whatever you have per question.

edit: NM, I forgot wolfram alpha. oof.


AP Calculus allows you to bring a graphing calculator, and one of the approved calculators is the TI-Nspire CAS, which has a very sophisticated computer algebra system that can do a lot of the things WolframAlpha can - like calculating antiderivatives. So you might be overestimating how much advantage WolframAlpha can confer :)


There are certain sections where calculator use is not permitted, however.


It’s a single FRQ, which must be completed in 45 minutes and you get a calculator. I think if it’s a typical AP calc question, you’d have enough time to reference stuff.


But take-home tests are some of the least robust to pinch hitting.


They are sending our scores and responses to our teachers, and I am sure that most teachers will report a student with a C in class who then gets a 5 on the AP exam.


The AP exam is a second chance for kids who grasped the material but didn't get great high school grades for whatever reason. I know a kid who could very well get a C in class and a 5 on the AP exam. The class is too easy for him.


These were my AP Calculus teacher’s AP score distributions for his students when I went through his class: http://drootr.com/calc/content/BCScores/BCData.html. Grades usually averaged around a B/B+, so many students with Cs were getting 5s on that exam.


Unlikely. Most AP teachers are rated on their AP pass rate. They have every incentive in the world to help you cheat.

That's why they don't let your teacher proctor your exam unless they have to.


A teacher who fails to report their cheating student seems to be taking on some personal risk…



In the information age, we are going to need an alternative to merely testing recall of information in an easy-to-grade way.


Even if they are, so what? Sure, time would probably be better spent making their online testing actually work. However, anti-cheating is always something these types of tests prepare for whether that's exam room monitoring or trying to protect their online exams. Setting up a honeypot to get students seems exactly like the lazy method these people would attempt.


Why is setting up a honeypot lazy? If they had been good at it it would have worked flawlessly.

Just disqualify anyone who copy/pastes answers from the known shared workspace.
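The mechanics are simple enough to sketch: plant a distinctive wrong answer in the shared workspace, then flag submissions that reproduce it. Everything here is hypothetical; nothing is publicly known about how (or whether) CollegeBoard actually implemented this:

```python
# Hypothetical honeypot check: flag students whose submitted answer
# matches an answer deliberately planted in the shared workspace.
PLANTED_ANSWERS = {
    "q1": "The reaction is endothermic because entropy decreases.",  # deliberately wrong
}


def normalize(text):
    """Lowercase and collapse whitespace so trivial edits don't evade the check."""
    return " ".join(text.lower().split())


def flag_submissions(submissions):
    """Return IDs of students whose answer reproduces a planted one.

    `submissions` maps student id -> {question id -> answer text}.
    """
    flagged = []
    for student_id, answers in submissions.items():
        for qid, planted in PLANTED_ANSWERS.items():
            if normalize(answers.get(qid, "")) == normalize(planted):
                flagged.append(student_id)
                break
    return flagged
```

A real system would want fuzzier matching than exact string equality, since students paraphrase; that's also where the false-positive risk discussed elsewhere in this thread comes from.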


I'm more curious how well the 'we made a whole new batch of tests under emergency time pressures so that students won't benefit from googling' claim holds up, I mean google certainly did its part here during the last decade by becoming less and less useful for finding answers to non SEOed questions and yet...


As a student who took 3 AP tests this week, it holds up well. The main weapon CollegeBoard leveraged is that these exams were only 45 minutes long.

For example, AP Calculus had two questions: a 25-minute and a 15-minute question, each with 9 or 10 parts. There simply isn't enough time to Google anything. If you look at Google Trends data [0] you can see hilarious spikes in related terms during the exams, but if you didn't have a good handle on the material then you would just run out of time.

[0]: https://trends.google.com/trends/explore?date=now%207-d&geo=...


Is there a sample questionnaire online ? What about someone using Wolfram Alpha etc. ?


https://apcentral.collegeboard.org/pdf/ap-2020exam-sample-qu...

The exams are designed with algebraic solvers in mind, and typically you don't actually find derivatives and integrals; they'll show you a graph or table instead of the equation itself. Students are typically allowed to use a TI-Nspire CAS CX on their exams, which has a lot of the same capabilities as WolframAlpha.


Why does the exam throw out all the student's answers if they fail to click the final button? It should just record each answer as they go. Then, if their computer dies or their Internet connection fails, at least they can get graded for the questions they successfully answered so far.

The UX design is broken.
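Incremental saving is the standard pattern here: persist each answer as it's entered and treat the final button as confirmation rather than the only write. A minimal sketch of the idea (the in-memory dict stands in for whatever server-side store a real exam platform would use):

```python
class ExamSession:
    """Persist each answer as it is entered, so a dropped connection
    or a failed final submit loses nothing already recorded."""

    def __init__(self):
        self.saved = {}        # question id -> answer, written as we go
        self.submitted = False

    def record_answer(self, question_id, answer):
        # Each answer is saved immediately, not buffered until submit.
        self.saved[question_id] = answer

    def submit(self):
        # The final button only marks the session complete; even if this
        # call never happens, everything in self.saved is still gradable.
        self.submitted = True

    def gradable_answers(self):
        return dict(self.saved)
```

With this design, the "We Did Not Receive Your Response" failure mode would at worst cost the student their last unsaved answer, not the whole exam.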


Found the below on the CollegeBoard website today [1]. Only effective for the second week of AP exams.

> Students with an unsuccessful submission will see instructions about how to email their response on the page that says, “We Did Not Receive Your Response.” The email address that appears will be unique to each student.

[1]: https://apcoronavirusupdates.collegeboard.org/faqs


If you think abstractly about pedagogy as a social science, it's very obvious from seeing how much education is struggling with cheating that the field has had zero progress in the past century. We need to stop conflating social development, socialized childcare, and education systems.

Here's my shitlist:

* Children should attempt hard problems that they can make progress on, but probably cannot solve. It is wrong that schools give nearly zero exposure to truly hard problems, and plenty of exposure to trivially solvable problems. And no, making them tease out the meaning of a word problem doesn't qualify as difficulty. I mean give them a problem that makes them find a Wikipedia page and learn on their own.

* Computers are here to stay. I remember when I was little, I was told that I wouldn't always have a calculator on me. That is demonstrably false, I literally do not go anywhere without a computer in my pocket. Furthermore, that computer has a WolframAlpha app that can interface with server clusters running the most advanced computer algebra systems in existence. Why did I learn stoichiometry? There is something deeply wrong with education if students could easily pass an AP Physics exam with access to WolframAlpha, and will (probably) have access to WolframAlpha in every practical application of their AP physics education, but are artificially prevented from having access during the exam. Why don't the tests correspond to real-world application of the subject? Oh, it's more convenient for you to evaluate it this way? That's tough shit, figure it out.

* Students should have an entire day dedicated to math, and then they should have an entire day dedicated to English, etc. Some students might get bored and tired, sure, because sometimes learning isn't pleasant. But over and over on this site we hear that successful developers are the people who are able to maintain focus for long periods of time on difficult problems. Why do students waste 35 minutes a day switching classrooms?

* There is not enough differentiation of students. Advanced students have so much of their time wasted. The advanced 3rd graders should be hanging out with advanced 6th graders.

The way that the education system is collapsing from cheating and going online-only is nothing short of exciting to me. A lot had to go wrong for us to get to this point -- answer banks exist because everyone uses the same textbook and the same curriculum. Cheating with other students exists because group-work is the natural state of humans. Students aren't excited about the work they do because they sense that there's a magical oracle on the internet with the solution to the simple problems they're solving.

It's all going to fall apart and that's something to celebrate. The thing that I'm most afraid of is that Americans have lost the ability to take risks at aggregate and think radically about redesigning institutions. All of these problems that I'm talking about are just as bad in every other country in the world. Some country is going to figure it out and they're going to reap the rewards, and I'm afraid that America isn't brave enough anymore to be the first.


I remember when I was little, I was told that I wouldn't always have a calculator on me.

This turned out to be false, but being able to do mental arithmetic is still an extremely valuable skill. The usability of your pocket calculator is really bad in comparison, and what the mental arithmetic gives you is a general automatic numeracy where you can ballpark the answer to numerical problems much faster than using a tool.

This becomes a qualitative difference because you'll do more of those mental checks as a matter of course that you would never pull out a calculator for. Kind of like how git made merges so painless that they're now a standard part of most people's development flow. That's the kind of qualitative impact that mental arithmetic can give you.

I admit that this is a point that's difficult to get across, and schools generally don't do a great job of it...


Yes, being able to calculate the tip for your bar tab without needing to dig into a purse to find your "calculator" is a nicety. Being able to calculate the savings from the 20% sale price is also a useful trick. My favorite is just being able to round up the prices of items at the grocery store and keep a running total in my head. I know I'm a nerd, but it flabbergasts me how difficult it is for others to grasp the concept of calculating the tip by doubling the value after moving the decimal point one position to the left. That's like 6th-grade-level math (at least it was when I learned it).
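Spelled out, the trick is just: shift the decimal one place left to get 10%, then double it for 20%:

```python
def quick_tip(total):
    """20% tip via the mental trick: move the decimal left (10%), double it."""
    ten_percent = total / 10
    return ten_percent * 2

# A $54.00 tab: 54.00 -> 5.40 -> doubled -> 10.80
```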


Agreed! To build on your point - I used to teach Algebra 1 in a Title 1 school. What many people don't realize about "memorizing math facts" in elementary school is that if someone doesn't have a basic level of numeracy, it's really hard to learn more advanced concepts (like solving for x in a simple equation). You can't understand solving for x if you don't fundamentally understand how two numbers change when I subtract 5 from each of them, and you also don't have free working memory in your brain for learning if you're using all your working memory to do the subtraction. It's an anecdotal observation, but it's something that isn't often talked about in conversations about "memorizing facts".

I guess what I'm trying to say is it's useful to be able to calculate a tip without a calculator or tally a grocery bill. But more importantly, it'll be a big hurdle to learning slightly more advanced math concepts if you can't do some of the basics in your head.


Don't you feel like you would have improved at this regardless of whether your schooling put a concerted effort into it? I've forgotten a lot of things from school -- doing math in my head isn't one of them because I use it all the time. Is that evidence that it's a good investment of time for 1st graders, or is it evidence that you would have learned it anyway due to the utility of it?


> something deeply wrong

If you do not learn things manually, you'll never develop a feel for answers that are right and answers that are wrong. I've seen too many people accept Garbage Out from a computer as The Word From God, never realizing that they'd put Garbage In, the computer program was buggy, or the computer program was not the right model for their particular problem.


I once had a patio installed. It was to be elliptically shaped, and the charge was per square foot. The contractor staked it out, and ordered the materials.

It didn't look quite right. I finally measured the major and minor axes, and computed the area. Turns out it was 30% smaller than the square footage quoted. The bricks were stacked up, and a little math gave me the square footage of the bricks. It was also 30% less than the quoted square footage.

I brought it up with the contractor that the size seemed less than the quoted size. He said he'd made a mistake staking it out. Obviously, he had not, else he'd have had too many bricks.

He did a good job anyway, and I paid him 30% less than the quoted amount.

I suspect he had realized long ago that his customers were math challenged, and weren't noticing his oversized quotes. So he kept increasing the gap until it got to a ridiculous 30% overage.

My advice to people who feel that math is a pointless imposition on your time, you're going to pay for not learning it. And you'll never realize you're being rooked.
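The check in that story is one formula: an ellipse's area is pi * a * b where a and b are the semi-axes, i.e. half of each measured axis. A sketch with made-up numbers (the story doesn't give the actual dimensions):

```python
import math


def ellipse_area(major_axis, minor_axis):
    """Area from the full major/minor axes; the semi-axes are half of each."""
    return math.pi * (major_axis / 2) * (minor_axis / 2)


# Hypothetical patio staked out at 20 ft by 14 ft:
actual = ellipse_area(20, 14)   # pi * 10 * 7, roughly 220 sq ft
# "30% smaller than quoted" means actual = 0.7 * quoted, so:
quoted = actual / 0.7
```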


> My advice to people who feel that math is a pointless imposition on your time, you're going to pay for not learning it. And you'll never realize you're being rooked.

I really hope that's not what you took away from my comment.


It's more of a general comment to people who justify cheating, complain about math requirements, and/or look for ways to avoid dealing with math.

It's relevant to why learn the mechanics when a computer can do it for you.


AP Physics (at least the algebra-based exams, never took calc based), is designed to be entirely theoretical. You are allowed a calculator, but I think I touched it once in the three hours. Everything used variables.


Honeypotting minors is so unethical I don’t know where to begin. These kids’ futures could’ve been completely fucked


It also bothers me that they cancelled exams of students they said had planned to cheat. The College Board should have given them an opportunity to take the exam and see if they participated in the cheat chats before cancelling their score. In my mind thinking about cheating, researching ways to cheat, and even planning on cheating isn’t the same as actually cheating. Anxious kids who are studying do all kinds of dumb stuff and joining a subreddit of potential cheaters doesn’t necessarily mean the student is a cheater. You have to give someone - especially a kid - the opportunity to do the right thing.


Yeah. What if those kids just joined the subreddit for fun, and didn't actually plan on using it?


While honeypotting teenagers does seem kind of fucked up, it's not the honeypotting that would have been screwing over the kids trying to use reddit to cheat on their exams. That's very much a "hoist with their own petard" scenario.


These are Advanced Placement tests for college credit. Nothing future fucking about it.


Cheating in higher education is a serious offense. Think loss of scholarships, failed classes that kill your GPA, expulsion.


Tests should have been stretched in time and spread across classrooms to have smaller groups of people to practice social distancing, not be administered online; it is a cheatfest.


r/entrapment


No, making something available isn't entrapment.



