
AI is the calculator of our era

With AI now a firm part of the HE landscape, Ignacio Aravena Gonzalez shows how he has changed assessment in his real estate finance course.

A familiar story sits at the origin of many technology debates in education. In Plato’s Phaedrus, Socrates worries that writing would create the appearance of wisdom without genuine understanding. Two millennia later, engineering schools fought over calculators and then computers. When slide rules disappeared, curricula shifted towards modelling and judgement. Students still needed numerical fluency, but assessment stopped rewarding arithmetic and started testing the ability to structure thinking and interrogate computational tools.

Generative AI has revived this debate, but with a twist. It does not just accelerate calculation; it drafts reports, recommendations and presentations that used to sit in the professional’s domain. Perhaps this is why there have been debates between those who treat generative AI tools as a problem and those who advocate for their use in higher education. Others argue that AI and assessment is better understood as a wicked problem without a correct solution, where any solution will require iteration over time. The real risk is not that students will use AI, but that educators continue to grade outputs that can be generated by AI while pretending they still certify human competence.

Despite these debates, one thing is certain: by disrupting traditional assessments, AI forces us to think about what we treat as evidence of learning. If we keep using the same tasks, assessment validity degrades. However, if we redesign tasks to incorporate AI, assessing the space between the tool’s output and the student’s judgement and reflection on it, assessment can become better aligned without having to go back to handwritten submissions.

In teaching real estate finance, I have already observed how students use AI in their calculations. However, in some cases, they are not aware of the internal inconsistencies between AI-based assumptions and outcomes. I can illustrate this with a concrete example from real estate teaching at the Master’s level, in this case in underwriting.

A robust viability exercise asks students to study, analyse, and report whether a proposed real estate project is financially viable, and under which assumptions that conclusion holds. In practice, this means building a discounted cash flow (DCF) model and financing structure, translating market evidence into assumptions about rents, costs, and yields, stress-testing those assumptions through scenario and sensitivity analysis, and turning the numbers into a defensible investment recommendation. For that reason, viability analysis sits in the domain of professional judgement under uncertainty.

Because it combines multiple technical components into a single workflow, generative AI is especially tempting in this setting. It can draft an investment memo, sketch a DCF, propose sensitivity analyses, summarise market reports, and flag inconsistencies across documents – all in under an hour instead of the three to five days it used to take students to complete this assessment. In the professional industry, workflow acceleration is already marketed as mainstream practice, compressing a two-day process into an hour.
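To make concrete what such a model involves, here is a minimal sketch of the DCF mechanics behind a viability check. All figures and the function itself are invented for illustration; a real underwriting model would layer in financing, costs, and taxes.

```python
# Minimal DCF viability sketch (illustrative figures, not market data).
# A five-year hold: annual net rent grows at `growth`, the asset is sold in
# year 5 at an assumed exit yield, and all cash flows are discounted at `r`.

def dcf_npv(price, rent, growth, exit_yield, r, hold_years=5):
    """NPV of buying at `price`, collecting growing net rent, and selling
    at `exit_yield` applied to the year after the holding period."""
    npv = -price
    for t in range(1, hold_years + 1):
        npv += rent * (1 + growth) ** (t - 1) / (1 + r) ** t
    # Exit value: the following year's rent capitalised at the exit yield.
    exit_value = rent * (1 + growth) ** hold_years / exit_yield
    return npv + exit_value / (1 + r) ** hold_years

# Base case: a GBP 10m purchase with GBP 0.5m initial net rent is roughly
# break-even, so the verdict hinges entirely on the assumptions.
print(f"Base-case NPV: {dcf_npv(10e6, 0.5e6, 0.02, 0.05, 0.07):,.0f}")

# Sensitivity: a 50bp shift in exit yield moves the NPV by several hundred
# thousand pounds - exactly the kind of lever students must interrogate.
for ey in (0.045, 0.050, 0.055, 0.060):
    print(f"exit yield {ey:.1%}: NPV {dcf_npv(10e6, 0.5e6, 0.02, ey, 0.07):,.0f}")
```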
Sector research similarly positions AI as a driver of efficiency gains across real estate tasks, including valuation support and risk identification. If we accept this uncritically, the conclusion seems obvious: if AI can produce the viability report, the assignment is dead. But coming to that conclusion repeats Socrates’ error if it assumes that what we previously graded was the thing that mattered, rather than a proxy that worked because it was complex and costly to produce.

The jagged frontier

Labour economics has long distinguished between routine and non-routine tasks, where technology complements human skill. Generative AI disrupts this distinction because language and analytic work, previously non-routine, has become partially automatable. A study published in 2023, based on a field experiment with Boston Consulting Group consultants, found that AI improves user performance on tasks inside its capability frontier, but on tasks outside it, users perform worse because they over-trust the tool. Other studies confirm this conclusion: one found 40% faster completion and 18% higher quality in writing tasks, with gains concentrated among weaker users, and others find comparable dynamics in customer support, with especially large gains for novices. If AI disproportionately helps novices, it may compress visible performance differences while masking gaps in understanding.

Translated into viability analysis, AI can generate a clean-looking model even when assumptions are incoherent or the risk narrative is weak. Worse, it can make a recommendation sound more persuasive, especially when the underlying analysis is faulty – a form of outsourcing that replaces understanding rather than supporting it. The issue for the instructor is not that AI can perform viability analysis, but how a student can demonstrate understanding or competence when AI can generate the first draft. If viability analysis is a professional workflow, then competence is not producing a spreadsheet; decision-making is the competence students need to demonstrate, and it is also a managerial skill. Decision-making requires modelling literacy, verification of data, scenario design under uncertainty, and ethical practice. This aligns with industry guardrails emphasising hallucination risk and with calls for understanding and discussing technological limitations rather than uncritically adopting tools.

Assessing the right skill

I agree that AI might diminish the value of traditional assessment, which can lead us to resist take-home assignments in favour of oral or invigilated in-class assessments. However, the implication is not to ban AI from viability analysis, but to change what we grade. Although reverting to handwritten exams may restore authentication, for applied professional degrees it risks measuring the wrong skill. Viability analysis in practice is a professional workflow involving tools, documents, and iterative verification, and that workflow should be reflected in assessments.

I propose two redesigns. First, grade the audit, not the artefact. Rather than marking the spreadsheet itself, assess the student’s ability to interrogate and verify it. This mirrors introductory econometrics courses, where students interpret and critique regression tables rather than writing code. Applied to viability, students would receive an AI-generated model and be asked to identify errors, justify corrections, and reconcile assumptions with market evidence.
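What might such an audit look like in practice? The sketch below is a hypothetical illustration: the “AI-generated” figures are invented stand-ins, and the checks simply test whether the numbers in the model are internally consistent with its own stated assumptions.

```python
# Hypothetical audit of an AI-generated DCF. None of these figures are real
# model outputs; they stand in for a spreadsheet handed to students. The
# checks flag inconsistencies between stated assumptions and numbers used.

stated = {"rent": 0.5e6, "growth": 0.02, "exit_yield": 0.05}

# Cash flows and exit value as they appear in the (invented) AI model.
model_cash_flows = [0.50e6, 0.51e6, 0.52e6, 0.54e6, 0.55e6]  # years 1-5
model_exit_value = 12.0e6                                    # year-5 sale

issues = []

# Check 1: do the cash flows actually grow at the stated rate?
for t, cf in enumerate(model_cash_flows, start=1):
    implied = stated["rent"] * (1 + stated["growth"]) ** (t - 1)
    if abs(cf - implied) / implied > 0.01:  # 1% tolerance
        issues.append(f"Year {t}: rent {cf:,.0f} vs {implied:,.0f} "
                      "implied by the stated growth rate")

# Check 2: is the exit value consistent with the stated exit yield?
implied_exit = stated["rent"] * (1 + stated["growth"]) ** 5 / stated["exit_yield"]
if abs(model_exit_value - implied_exit) / implied_exit > 0.01:
    implied_yield = stated["rent"] * (1 + stated["growth"]) ** 5 / model_exit_value
    issues.append(f"Exit value {model_exit_value:,.0f} implies a "
                  f"{implied_yield:.2%} yield, not the stated "
                  f"{stated['exit_yield']:.2%}")

print("\n".join(issues) if issues else "No inconsistencies found")
```

On these invented numbers, the audit surfaces mismatched rent growth in later years and an exit value that quietly assumes a sharper yield than the memo states – precisely the internal inconsistencies I have seen students miss.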
Second, introduce outside-the-frontier reasoning. Include a deliberately tricky element, such as a planning constraint that alters timing, or lease covenants that reshape the cash-flow structure, where AI is likely to be wrong or to weight the element incorrectly. The learning outcome becomes ‘identify and rectify’, not ‘generate’. This directly targets the over-reliance failure mode in the jagged frontier evidence.
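To show why timing is such an effective twist, here is a small extension of the earlier sketch, again with invented figures: a planning delay pushes all income back while the purchase price is still paid up front.

```python
# Effect of an assumed planning delay: net rent only begins after `delay`
# years, while the price is paid at time zero. Figures are illustrative
# and match the earlier base case, which is roughly break-even.

def npv_with_delay(price, rent, growth, exit_yield, r, hold_years=5, delay=0):
    """NPV when the letting period starts `delay` years after purchase."""
    npv = -price
    for t in range(delay + 1, delay + hold_years + 1):
        npv += rent * (1 + growth) ** (t - delay - 1) / (1 + r) ** t
    exit_value = rent * (1 + growth) ** hold_years / exit_yield
    return npv + exit_value / (1 + r) ** (delay + hold_years)

for delay in (0, 1, 2):
    npv = npv_with_delay(10e6, 0.5e6, 0.02, 0.05, 0.07, delay=delay)
    print(f"{delay}-year planning delay: NPV {npv:,.0f}")
```

A single year of delay turns a marginal scheme into a clearly unviable one, yet a fluent AI draft that glosses over the constraint can still read persuasively, which is exactly the failure students must learn to catch.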
Anecdotally, these two approaches have worked well with my students. During reading week, we used generative AI to test successful and unsuccessful DCF valuations over a two-hour workshop. By the end, the students understood how prompt engineering can enhance their use of AI, while, at the same time, they saw how subtle mistakes can lead to errors that, in practice, can cost millions of pounds.

This intuition is not limited to quantitative disciplines; another post on this blog argues that if a standard essay prompt can be answered by an AI, the response is not necessarily to prohibit AI, but to redesign the task by using AI outputs as material for critique, reflection, and improvement.

What do we value?

When we say AI changes the paradigm, what we often mean is that AI makes previously costly and advanced skills cheap and easy to perform. This is not the end of higher education; it is a prompt to clarify what we really value. For viability analysis, the value is not spreadsheeting; it is judgement, verification, and accountable decision-making under uncertainty. If we redesign assessment around those competencies, AI becomes the calculator of our era, removing the friction that used to masquerade as learning and forcing us to measure what we claim we care about. The harder work is not to ban or celebrate, but to redesign the standard for competence.

Main image: Räknesticka, Jamtli, Sweden on Europeana (CC BY-NC-ND).

Note: A version of this post first appeared on 6 March 2023 on the Contemporary Issues in Teaching and Learning Blog, part of the PGCertHE programme at the LSE. This post is opinion-based and does not reflect the views of the London School of Economics and Political Science or any of its constituent departments and divisions.