Oracle University Podcast
Oracle University Podcast delivers convenient, foundational training on popular Oracle technologies such as Oracle Cloud Infrastructure, Java, Autonomous Database, and more to help you jump-start or advance your career in the cloud.
142 episodes
All episodes
Networking & Security Essentials (17:25)
How do all your devices connect and stay safe in the cloud? In this episode, Lois Houston and Nikita Abraham talk with OCI instructors Sergio Castro and Orlando Gentil about the basics of how networks work and the simple steps that help protect them. You'll learn how information gets from one place to another, why tools like switches, routers, and firewalls are important, and what goes into keeping access secure. The discussion also covers how organizations decide who can enter their systems and how they keep track of activity. Cloud Tech Jumpstart: https://mylearn.oracle.com/ou/course/cloud-tech-jumpstart/152992 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. -------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! In the last episode, we spoke about local area networks and domain name systems. Today, we'll continue our conversation on the fundamentals of networking, covering a variety of important topics. 00:50 Lois: That's right, Niki. And before we close, we'll also touch on the basics of security. Joining us today are two OCI instructors from Oracle University: Sergio Castro and Orlando Gentil. So glad to have you both with us, guys. Sergio, with so many users and devices connecting to the internet, how do we make sure everyone can get online? Can you break down what Network Address Translation, or NAT, does to help with this? Sergio: The world population is bigger than 4.3 billion people, and IPv4 only offers about 4.3 billion unique addresses. That means that if we were to connect every single human to the internet, we would not have enough addresses. And not all of us are connected to the internet, but those of us who are know that we have more than one device at our disposal. We might have a computer, a laptop, mobile phones, you name it. And all of them need IP addresses. So that's why Network Address Translation exists: it translates your communication from a private IP to a public IP address. That's the main purpose: translate. 02:05 Nikita: Okay, so with NAT handling the IP translation, how do we ensure that the right data reaches the right device within a network? Or to put it differently, what directs external traffic to specific devices inside a network? Sergio: Port forwarding works in a reverse way to Network Address Translation. So, let's assume that this PC here, you want to turn it into a web server. So, people from the outside, customers from outside your local area network, will access your PC web server. Let's say that it's an online store. Now all of these devices are using the same public IP address. So how would the traffic be routed specifically to this PC and not to the camera or to the laptop, which is not a web server, or to your IP TV? So, this is where port forwarding comes into play.
Basically, whenever the router detects a request coming in on the forwarded port, it will route that request to your PC. It allows any external device that wants to access this particular web server to establish a session. So, it's a permission that you're granting to this PC and only to this PC. The other devices will still be isolated. That's what port forwarding is. 03:36 Lois: Sergio, let's talk about networking devices. What are some of the key ones, and what role do they play in connecting everything together? Sergio: There are plenty of devices for interconnectivity. These are devices that are different from the actual compute instances, virtual machines, cameras, and IPTV. These are for interconnecting networks. And they have several functionalities. 03:59 Nikita: Yeah, I often hear about a default gateway. Could you explain what that is and why it's essential for a network to function smoothly? Sergio: A gateway is basically what a web browser goes through when it asks for a service from a web server. We have a gateway in the middle that will take us to that web server. That's basically the router. A gateway doesn't necessarily have to be a router. It depends on what device you're addressing in a particular configuration. So, a gateway is a connectivity device that connects two different networks. That's basically the functionality. 04:34 Lois: Ok. And when does one use a default gateway? Sergio: When you do not have a specific route that is targeting a specific router. You might have more than one router in your network, connecting to different other local area networks. You might have a route that will take you to local area network B. And then you might have another router that is connecting you to the internet. So, if you don't have a specific route that will take you to local area network B, then traffic is going to be utilizing the default gateway. It directs data packets to other networks when no specific route is known. In general terms, the default gateway, again, doesn't have to be a router. It can be any device. 05:22 Nikita: Could you give us a real-world example, maybe comparing a few of these devices in action, so we can see how they work together in a typical network? Sergio: For example, we have the hub. And the hub operates at the physical layer, or layer 1. And then we have the switch. And the switch operates at layer 2. And we also have the router. And the router operates at layer 3. So, what's the big difference between these devices and the layers that they operate in? Hubs work in the physical layer of the OSI model. Basically, a hub connects multiple devices and makes them act as a single network segment. Now, the switch operates at the data link layer and is essentially a multiport bridge, used for filtering traffic by reading the addresses of the source and destination. And these are the MAC addresses that I'm talking about. So, it reads where the packet is coming from and where it is going at the local area network level. It connects multiple network segments. And each port is connected to a different segment. And the router is used for routing outside of your local area network; it performs traffic-directing functions on the internet. A data packet is typically forwarded from one router to another through different networks until it reaches its destination node. So, to recap: the switch connects multiple network segments, and each port of the switch is connected to a different segment.
And the router performs traffic-directing functions on the internet. It takes data from one router to another, and it works at the TCP/IP network layer, or internet layer. 07:22 Lois: Sergio, what kind of devices help secure a network from external threats? Sergio: The network firewall is a security device that acts as a barrier between a trusted internal network and an untrusted external network, such as the internet. The network firewall is the first line of defense for traffic that passes in and out of your network. The firewall examines traffic to ensure that it meets the security requirements set by your organization, allowing or blocking traffic based on set criteria. And the main benefit is that it improves security for access management and network visibility. 08:10 Are you keen to stay ahead in today's fast-paced world? We've got your back! Each quarter, Oracle rolls out game-changing updates to its Fusion Cloud Applications. And to make sure you're always in the know, we offer New Features courses that give you an insider's look at all of the latest advancements. Don't miss out! Head over to mylearn.oracle.com to get started. 08:36 Nikita: Welcome back! Sergio, how do networks manage who can and can't enter based on certain permissions and criteria? Sergio: The access control list is like the gatekeeper into your local area network. Think about the access control list as the visa on your passport, assuming that the country is your local area network. Now, when you have a passport, you might get a visa that allows you to go into a certain country. So the access control list is a list of rules that defines which users, groups, or systems have permission to access specific resources on your network. It is a gatekeeper that specifies who's allowed and who's denied. If you don't have a visa to go into a specific country, then you are denied. Similarly here, if you are not part of a rule, if the service that you're trying to access is not part of the rules, then you cannot get in. 09:37 Lois: That's a great analogy, Sergio. Now, let's turn our attention to one of the core elements of network security: authentication and authorization. Orlando, can you explain why authentication and authorization are such crucial aspects of a secure cloud network? Orlando: Security is one of the most critical pillars in modern IT systems. Whether you are running a small web app or managing global infrastructure, every secure system starts by answering two key questions: Who are you, and what are you allowed to do? This is the essence of authentication and authorization. Authentication is the first step in access control. It's how a system verifies that you are who you claim to be. Think of it like showing your driver's license at a security checkpoint. The guard checks your photo and personal details to confirm your identity. In IT systems, the same process happens using one or more of these factors. It will ask you for something you know, like a password; something you have, like a security token; or something you are, like a fingerprint. An identity does not refer to just a person. It's any actor, human or not, that interacts with your systems. Users are straightforward: think employees logging into a dashboard. But services and machines are equally important. A backend API may need to read data from a database, or a virtual machine may need to download updates.
Treating these non-human identities with the same rigor as human ones helps prevent unauthorized access and improves visibility and security. After confirming your identity, the system can move on to deciding what you're allowed to access. That's where authorization comes in. Once authentication confirms who you are, authorization determines what you are allowed to do. Sticking with the driver's license analogy, you've shown your license and proven your identity, but that doesn't mean that you can drive anything anywhere. Your license class might let you drive a car, but not a motorcycle or a truck. It might be valid in your country, but not in others. Similarly, in IT systems, authorization defines what actions you can take and on which resources. This is usually controlled by policies and roles assigned to your identity. It ensures that users or services only get access to the things they are explicitly allowed to interact with. 12:34 Nikita: How can organizations ensure secure access across their systems, especially when managing multiple users and resources? Orlando: Identity and Access Management governs who can do what in our systems. Individually, authentication verifies identity and authorization grants access. However, managing these processes at scale, across countless users and resources, becomes a complex challenge. That's where Identity and Access Management, or IAM, comes in. IAM is an overarching framework that centralizes and orchestrates both authentication and authorization, along with other critical functions, to ensure secure and efficient access to resources. 13:23 Lois: And what are the key components and methods that make up a robust IAM system? Orlando: User management, a core component of IAM, provides a centralized identity management system for all user accounts and their attributes, ensuring consistency across applications. Key functions include user provisioning and deprovisioning, automating account creation for new users, and timely removal upon departure or role changes. It also covers the full user account lifecycle, including password policies and account recovery. Lastly, user management often involves directory services integration to unify user information. Access management is about defining access permissions, specifically what actions users can perform and which resources they can access. A common approach is role-based access control, or RBAC, where permissions are assigned to roles and users inherit those permissions by being assigned to roles. For more granular control, policy-based access control allows for rules based on specific attributes. Crucially, access management enforces the principle of least privilege, granting only the minimum necessary access, and supports segregation of duties to prevent conflicts of interest. For authentication, IAM systems support various methods. Single-factor authentication, relying on just one piece of evidence like a password, offers basic security. However, multi-factor authentication significantly boosts security by requiring two or more distinct verification types, such as a password plus a one-time code. We also have biometric authentication, using unique physical traits, and token-based authentication, common for APIs and web services. 15:33 Lois: Orlando, when it comes to security, it's not just about who can access what, but also about keeping track of it all. How does auditing and reporting maintain compliance? Orlando: Auditing and reporting are essential for security and compliance.
This involves tracking user activities, logging all access attempts and permission changes. It's vital for meeting compliance and regulatory requirements, allowing you to generate reports for audits. Auditing also aids in security incident detection by identifying unusual activities and providing data for forensic analysis after an incident. Lastly, it offers performance and usage analytics to help optimize your IAM system. 16:22 Nikita: That was an incredibly informative conversation. Thank you, Sergio and Orlando, for sharing your expertise with us. If you'd like to dive deeper into these concepts, head over to mylearn.oracle.com and search for the Cloud Tech Jumpstart course. Lois: I agree! This was such a great conversation! Don't miss next week's episode, where we'll continue exploring key security concepts to help organizations operate in a scalable, secure, and auditable way. Until next time, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 16:56 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
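To make Sergio's gatekeeper analogy from this episode more concrete, here is a minimal, hypothetical sketch of how an access control list can be evaluated: an ordered list of rules checked top to bottom, with a default deny when nothing matches. The rule fields, addresses, and ports are invented for illustration and don't reflect any particular vendor's ACL format.

```python
# Hypothetical sketch of an access control list (ACL) evaluator.
from dataclasses import dataclass

@dataclass
class Rule:
    action: str   # "allow" or "deny"
    source: str   # source network prefix, e.g. "10.0.1." (naive prefix match)
    port: int     # destination port the rule applies to

ACL = [
    Rule("allow", "10.0.1.", 443),  # internal clients may reach HTTPS
    Rule("allow", "10.0.1.", 22),   # internal clients may reach SSH
    Rule("deny",  "10.0.2.", 22),   # guest subnet may not reach SSH
]

def evaluate(source_ip: str, port: int) -> str:
    """Return the action of the first matching rule; deny otherwise."""
    for rule in ACL:
        if source_ip.startswith(rule.source) and port == rule.port:
            return rule.action
    return "deny"  # like a missing visa: no matching rule means no entry

if __name__ == "__main__":
    print(evaluate("10.0.1.15", 443))   # allow
    print(evaluate("10.0.2.7", 22))     # deny
    print(evaluate("203.0.113.9", 80))  # deny (no rule matches)
```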
In this episode, hosts Lois Houston and Nikita Abraham team up with Senior Principal OCI Instructor Sergio Castro to unpack the basics of cloud networking and the Domain Name System (DNS). You'll learn how local and virtual networks connect devices, and how DNS seamlessly translates familiar names like oracle.com into addresses computers understand. Cloud Tech Jumpstart: https://mylearn.oracle.com/ou/course/cloud-tech-jumpstart/152992 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------------------ Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! For the last few weeks, we've been talking about different aspects of cloud data centers. Today, we're focusing on something that's absolutely key to how everything works in the cloud: networking and domain name systems. 00:52 Lois: And to guide us through it, we've got Sergio Castro, Senior Principal OCI Instructor at Oracle University. We'll start by trying to understand why networking is so crucial and how it connects everything behind the scenes. Sergio, could you explain what networking means in simple terms, especially for folks new to cloud tech? Sergio: Networking is the backbone of cloud computing. It is a fundamental service because it provides the infrastructure for connecting users, applications, and resources within a cloud environment. It basically enables data transfers. It facilitates remote access. And it ensures that cloud services are accessible to users, provided that these users have the correct credentials. 01:38 Nikita: Ok, can you walk us through how a typical network operates? Sergio: Networking typically starts with the local area network. Basically, networking is a crucial component of any IT service because it's the foundation of the architecture framework for any of the services that we consume today. So, a network is two or more computers interconnected to each other. And it doesn't necessarily need to be a computer. It can be another device such as a printer or an IP TV or an IP phone or an IP camera. Many devices can be part of a local area network. And a local area network can be very small, like I mentioned before, two or more computers, or it could grow into a very robust and complicated set of interconnected networks. And if that happens, then it can become very expensive as well. Cloud networking is the Achilles' heel for many database administrators, programmers, and quality assurance engineers, really anyone in IT other than a network administrator. When the network starts to grow, managing access and permissions and implementing robust security measures, coupled with the critical importance of reliable and secure performance, can create significant hurdles. 03:09 Nikita: What are the different types of networks we have? Sergio: A local area network is basically in one building.
It can cover maybe two buildings that are in close proximity on a small campus, but typically it's very small by definition, and everything is interconnected via one router, typically. A metropolitan area network is a network that spans a city or a metro area, hence the name. So, one building can be on one edge of the city and the other building can be at the other edge of the city, and they are interconnected by a digital circuit, typically. So in that case, it's more than one building, and the separation between those buildings is considerable. It can run to several miles. And a wide area network is a network that spans multiple cities, states, or countries; it can even be international. 04:10 Lois: I think we'll focus on the local area network for today's conversation. Could you give us a real-world example, maybe what a home office network setup looks like? Sergio: If you are accessing this session from your home office or a home network, typically you have a router that is provided to you by the internet vendor, the internet service provider. And then you have your laptop or your computer, your PC, connected to that router. And then you might have other devices connected either via cable, an Ethernet cable, or Wi-Fi. And the interconnectivity within that small building is what makes a local area network. And it looks very similar once you move on into a corporate office. Again, it's two or more computers interconnected. That's what makes a local area network. The difference in a corporate office is that you have many more computers. And because you have many more computers, that local area network might be divided into subnets. And for that, you need a switch. So, you have additional devices like a switch and a firewall and the router. And then you might have a server as well. So that's the local area network: two or more computers. And local area networks are capable of high speeds because the devices are in close proximity to each other. 05:47 Nikita: Ok… so obviously a local area network has several different components. Let's break them down. What's a client, what's a server, and how do they interact? Sergio: A client is basically a requester of a service. When you hop into your browser and want to go to a website, for example, oracle.com, you type www.oracle.com, and you are requesting a service from a server. And that server typically resides in a data center; oracle.com, under the Oracle domain, sits in a big data center with many interconnected servers. Interconnected so they can concurrently serve millions of requests coming into www.oracle.com at the same time. So, servers provide services to client computers. Basically, that's the relation: a client requests a service and the server provides that service. 06:50 Lois: And what does that client-server setup actually look like? Sergio: So, let's continue with our example of a web browser requesting a service from a web server. In this case, the physical computer is the server. And it has software running on it, and that makes it a web server. So, once you type www.oracle.com, the browser sends the request and the request is received. And provided that everything's configured correctly and there are no typos, the server will provide a response and basically give you the view of the website.
And that's obviously within the local area network, maybe when quality assurance was testing the site before it went live. But when it goes live, then you have the internet in the middle. And the internet in the middle has many routers, hubs, and switches. 07:51 Transform the way you work with Oracle Database 23ai! This cutting-edge technology brings the power of AI directly to your data, making it easier to build powerful applications and manage critical workloads. Want to learn more about Database 23ai? Visit mylearn.oracle.com to pick from our range of courses and enroll today! 08:16 Nikita: Welcome back! Sergio, would this client-server model also apply to my devices at home? Sergio: In your own local area network, you have client-server interactions even without noticing. For example, let's go back to our home office example. What happens if we add another laptop into the scenario? All of these devices need a way to communicate. And for that, they have an IP address. And who provides that IP address? The minute you add it, the new device is going to send a request to the router. We call it a router, but it has multiple functions, much like the handheld device that we call a smartphone has many functions: camera, calendar, and many other features. The router has an additional function called Dynamic Host Configuration Protocol; it acts as a DHCP server. So basically, the laptop requests, hey, give me an IP address, and then the router, the DHCP server, replies, here's your IP address. And it's going to be a different one for each device, so they don't overlap. So that's an example of client-server. 09:32 Lois: And where do virtual networks fit into all this? Sergio: A virtual network is basically a software version of the physical network. It looks and feels exactly as a physical network does. In the physical network, you have a communication path, either Wi-Fi or an Ethernet cable. And then you add your workstations or devices on top of that. And then you might create subnets. In a software-defined network, or a virtual network, the connectivity, the cabling, and all of that are software-defined. And it looks exactly the same, except that everything is software. A virtual network can communicate with a physical network as if that virtual network were another physical network. Again, this is a software network, a software-defined network, a virtual network, no longer a physical network. 10:42 Lois: Let's switch gears a little and talk about Domain Name Systems. Sergio, can you explain what DNS is, and why it's important when we browse the web? Sergio: DNS is the global database for internet addressing. DNS plays a very important role on the internet, and many internet services are closely related to DNS. The main functionality of DNS is to translate easy-to-remember names into IP addresses. Some IP addresses might be very easy to remember. However, if you have many of them, it's easier to remember oracle.com or ucla.edu or navy.mil for the military or eds.org for an organization or gobierno.mx for Mexico. So that's the main feature of DNS. It's very similar to the contacts application in your mobile phone, because the contacts application maps names to phone numbers. It's easier to remember Bob's phone than 555-123-4567.
So, it's easier to remember the names of the people in your contacts list, just as it is easier to remember, as previously mentioned, oracle.com than 138.1.33.162. Again, 138.1.33.162 might be easy for you to remember if that's the only one that you need to remember. But if you have 20, 40, 50, like we do with phone numbers, it's easier to remember oracle.com or ucla.edu. And this mapping is essential because we work with names; they're easier for us to remember. However, the fact is that computers still need to use IP addresses. And remember that this is the decimal representation of a binary number. It's a lot harder for us to remember the 32 bits, or each one of the octets, in binary. So that's the main purpose of DNS. Now the big difference is that the contact list in a cell phone is unique to that individual phone. However, DNS is global. It applies to everybody in the world. For anybody typing oracle.com, it will translate into 138.1.33.162. Now this is an actual IP address of oracle.com. Oracle.com has many IP addresses. If you ping oracle.com, chances are that this is one of the many addresses that maps to oracle.com. 13:35 Nikita: You mentioned that a domain name like oracle.com can have many IP addresses. So how does DNS help my computer find the right one? Sergio: So, let's say that you want to look for www.example.com. How do you do that? In your computer instance or in your terminal, in your laptop, in your computer, you type in your browser "www.example.com." If the browser doesn't have that information in cache, then it's going to first ask your DNS server, the one that you have assigned and indicated in your configuration. And if the DNS server knows it, it will reply that the information is 96.7.128.198. This address is real, and your browser will go to this address once you type www.example.com. 14:34 Nikita: But what happens if the browser doesn't know the address? Sergio: This is where it gets interesting. Your browser wants to go to www.example.com, and it's going to look within its cache. If it doesn't have it, then the first step is to go to your DNS server and ask it, hey, if you don't know this address, go ahead and find out. So, it goes to the root server. These servers are administered by IANA. And it's going to send the question, hey, what's the IP address for www.example.com? And if the root server doesn't know it, it's going to let you know, hey, ask the top-level domain name server, in this case, the .com top-level domain name server. So, you go ahead and ask this top-level domain name server, hey, what's the IP address for example.com? And if the top-level domain name server doesn't know, it's going to tell you, hey, ask example.com. And example.com is actually within the customer's domain. And then, based on these instructions, you ask, what is the IP address for www.example.com? And it will provide you with the IP address. And once your DNS server has the IP address, it's going to relay it to your web browser. And this is where your web browser actually reaches 96.7.128.198. Very interesting, isn't it? 16:23 Lois: Absolutely! Sergio, you mentioned top-level domain names. What are they and how are they useful? Sergio: A top-level domain is the rightmost segment of a domain name, located after the last visible dot in the domain name. So oracle.com or cloud.oracle.com is a domain name, and .com is a top-level domain.
And the purpose of the top-level domain is to indicate certain attributes of a website. This top-level domain indicates that this is a commercial site. Now, .edu, for example, is a top-level domain name for higher education. We also have .org for nonprofit organizations and .net for network service providers. And we also have country-specific ones: .ca for Canadian websites, .it for Italian websites. Now, a lot of companies that are in the information technology business utilize .it to indicate that they're in information technology. There's also .us for US companies, though most of the time this is optional; .com, .org, and .net are understood to be from the US. Now, if .com is a top-level domain name, what is the oracle in cloud.oracle.com? Oracle is the second-level domain name. And in this case, cloud is the third-level domain name. And lately you've been seeing a lot more top-level domain names. These are the classic ones, but now you get .AI, .media, .comedy, .people, and so on and so forth. There are many, many more; even companies now have the option of registering their company name as a top-level domain name. 18:24 Nikita: Thank you, Sergio, for this deep dive into local area networks and domain name systems. If you want to learn about the topics we covered today, go to mylearn.oracle.com and search for the Cloud Tech Jumpstart course. Lois: And don't forget to join us next week for another episode on networking essentials. Until next time, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 18:46 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
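The name-to-address lookup Sergio walks through in this episode can be observed directly from any machine with Python's standard library. A minimal sketch, assuming you have network access and a configured DNS resolver; the hostnames are just examples, and the addresses returned will vary with your resolver and location.

```python
# Ask the system's configured resolver to translate names into IPv4 addresses,
# the same mapping the episode compares to a phone's contacts list.
import socket

for name in ("oracle.com", "example.com"):
    # getaddrinfo queries your DNS server, which in turn walks the root and
    # top-level domain servers as needed (and caches results along the way).
    infos = socket.getaddrinfo(name, None, family=socket.AF_INET)
    addresses = sorted({info[4][0] for info in infos})
    print(f"{name} -> {', '.join(addresses)}")
```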
Cloud Data Centers: Core Concepts - Part 4 (13:56)
In this episode, hosts Lois Houston and Nikita Abraham, along with Principal OCI Instructor Orlando Gentil, break down the differences between Infrastructure-as-a-Service, Platform-as-a-Service, and Software-as-a-Service. The conversation explores how each framework influences control, cost efficiency, expansion, reliability, and contingency planning. Cloud Tech Jumpstart: https://mylearn.oracle.com/ou/course/cloud-tech-jumpstart/152992 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ----------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Nikita: Welcome to the Oracle University Podcast! I'm Nikita Abraham, Team Lead: Editorial Services with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hey there! Last week, we spoke about how hypervisors, virtual machines, and containers have transformed data centers. Today, we're moving on to something just as important—the main cloud models that drive modern cloud computing. Nikita: Orlando Gentil, Principal OCI Instructor at Oracle University, joins us once again for part four of our discussion on cloud data centers. 01:01 Lois: Hi Orlando! Glad to have you with us today. Can you walk us through the different types of cloud models? Orlando: These are commonly categorized into three main service models: Infrastructure-as-a-Service, Platform-as-a-Service, and Software-as-a-Service. Let's use the idea of getting around town to understand cloud service models. IaaS is like renting a car. You don't own the car, but you control where it goes, how fast, and when to stop. In cloud terms, the provider gives you the infrastructure—virtual machines, storage, and networking—but you manage everything on top: the OS, middleware, runtime, and application. PaaS is like using a shuttle service. You bring your bags—your code, pick your destination—your app requirements, but someone else drives and maintains the vehicle. You don't worry about the engine, fuel, or route planning. That's the platform's job. Your focus stays on development and deployment, not on servers or patching. SaaS is like ordering a taxi. You say where you want to go and everything else is handled for you. It's the full-service experience. In the cloud, SaaS is software you use over the web—email, CRM, project management. No infrastructure, no updates, just productivity. 02:32 Nikita: Ok. How do the trade-offs between control and convenience differ across SaaS, PaaS, and IaaS? Orlando: With IaaS, much like renting a car, you gain high control. You are managing components like the operating system, runtime, your applications, and your data. In return, the provider expertly handles the underlying virtual machines, storage, and networking. This model gives you immense flexibility. Moving to PaaS, our shuttle service, you shift to a medium level of control but gain significantly higher convenience. Your primary focus remains on your application code and data.
The provider now takes on the heavy lifting of managing the runtime environment, the operating system, the servers themselves, and even the scaling. Finally, SaaS, our taxi service, offers the highest convenience with the lowest level of control. Here, your responsibility is essentially just using the application and managing your specific configurations or data within it. The cloud provider manages absolutely everything else—the entire infrastructure, the platform, and the application itself. 03:52 Nikita: One of the top concerns for cloud users is cost optimization. How can we manage this? Orlando: Each cloud service model offers distinct strategies to help you manage and reduce your spending effectively, as well as different factors that drive those costs. For Infrastructure-as-a-Service, where you have more control, optimization largely revolves around smart resource management. This means rightsizing your VMs, ensuring they are not overprovisioned, and actively turning off idle resources when not in use. Leveraging preemptible or spot instances for flexible workloads can also significantly cut costs. Your charges here are directly tied to your compute, storage, and network usage, so efficiency is key. Moving to Platform-as-a-Service, where the platform is managed for you, optimization shifts slightly. Strategies include choosing scalable platforms that can efficiently handle fluctuating demand, opting for consumption-based pricing where available, and diligently optimizing your runtime usage to minimize processing time. Costs in PaaS are typically based on your application usage, runtime hours, and storage consumed. Finally, for Software-as-a-Service, where you consume a ready-to-use application, cost optimization centers on licensing and usage. This involves consolidating tools to avoid redundant subscriptions, selecting usage-based plans if they align better with your needs, and crucially, eliminating any unused licenses. SaaS costs are generally based on subscription or per-user fees. Understanding these nuances is essential for effective cloud financial management. 05:52 Lois: Ok. And what about scalability? How does each model handle the ability to grow and shrink with demand, without needing manual hardware changes? Orlando: How you achieve and manage that scalability varies significantly across our three service models. For Infrastructure-as-a-Service, you have the most direct control over scaling. You can implement manual or auto scaling by adding or removing virtual machines as needed, often leveraging load balancers to distribute traffic. In this model, you configure the scaling policies and parameters based on your specific workload. Moving to Platform-as-a-Service, the scaling becomes more automated and elastic. The platform automatically adjusts resources based on your application's demand, allowing it to seamlessly handle traffic spikes or dips. Here, the provider manages the underlying scaling behavior, freeing you from that operational burden. Finally, with Software-as-a-Service, scalability is largely abstracted and invisible to the user. The application scales automatically in the background, with the entire process fully managed by the provider. As a user, you simply benefit from the application's ability to handle millions of users without ever needing to worry about the infrastructure. Understanding these scaling differences is crucial for selecting the right model for your application's needs.
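To put some rough numbers behind the IaaS strategies Orlando lists (rightsizing VMs, turning off idle resources), here's a back-of-the-envelope sketch. All hourly rates, utilization figures, and thresholds are invented for illustration; real prices vary by provider, shape, and region.

```python
# Hypothetical IaaS rightsizing math: compare current spend against a plan
# that shuts down idle VMs and downsizes underutilized ones.
HOURS_PER_MONTH = 730

vms = [
    # (name, hourly_rate_usd, avg_cpu_utilization) - all made-up numbers
    ("web-1", 0.20, 0.65),
    ("web-2", 0.20, 0.08),   # mostly idle: candidate for shutdown
    ("batch", 0.40, 0.12),   # oversized: candidate for a smaller shape
]

current = sum(rate * HOURS_PER_MONTH for _, rate, _ in vms)

optimized = 0.0
for name, rate, util in vms:
    if util < 0.10:
        continue              # turn off idle resources entirely
    if util < 0.30:
        rate = rate / 2       # assume a shape half the size suffices
    optimized += rate * HOURS_PER_MONTH

print(f"current:   ${current:,.2f}/month")
print(f"optimized: ${optimized:,.2f}/month")
print(f"savings:   ${current - optimized:,.2f}/month")
```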
07:34 Join the Oracle University Learning Community and tap into a vibrant network of over 1 million members, including Oracle experts and fellow learners. This dynamic community is the perfect place to grow your skills, connect with likeminded learners, and celebrate your successes. As a MyLearn subscriber, you have access to engage with your fellow learners and participate in activities in the community. Visit community.oracle.com/ou to check things out today! 08:05 Nikita: Welcome back! We've talked about cost optimization and scalability in cloud environments. But what about ensuring availability? How does that work? Orlando: Availability refers to the ability of a system or service to remain accessible and operational, even in the face of failures or extremely high demand. The approach to achieving and managing availability, and crucially, your role versus the provider's, differs greatly across each model. With Infrastructure-as-a-Service, you have the most direct control over your availability strategy. You will be responsible for designing an architecture that includes redundant VMs, deploying load balancers, and potentially even multi-region setups for disaster recovery. Your specific role involves designing this architecture and managing your failover process and data backups. The provider's role, in turn, is to deliver the underlying infrastructure with defined service level agreements, or SLAs, and health monitoring. For Platform-as-a-Service, the platform itself offers a higher degree of built-in high availability and automated failover. While the provider maintains the runtime platform's availability, your role shifts. You need to ensure your application's logic is designed to gracefully handle retries and potential transient failures that might occur. Finally, with Software-as-a-Service, availability is almost entirely handled for you. The provider ensures fully abstracted redundancy and failover behind the scenes. Your role becomes largely minimal, often just involving specific application configurations. The provider is entirely responsible for full application uptime and the underlying high-availability infrastructure. Understanding these distinct roles in ensuring availability is essential for setting expectations and designing your cloud strategy efficiently. 10:19 Lois: Building on availability, let's talk Disaster Recovery. Orlando: DR is about ensuring your systems and data can be recovered and brought back online in the event of a significant failure, whether it's a hardware crash, a natural disaster, or even human error. Just like the other aspects, the strategy and responsibilities for DR vary significantly across the cloud service models. For Infrastructure-as-a-Service, you have the most direct involvement in your DR strategy. You need to design and execute custom DR plans. This involves leveraging capabilities like multi-region backups, taking VM snapshots, and setting up failover clusters. A real-world example might be using Oracle Cloud compute to replicate your VMs to a secondary region with block volume backups to ensure business continuity. Essentially, you manage your entire DR process here. Moving to Platform-as-a-Service, disaster recovery becomes a shared responsibility. The platform itself offers built-in redundancy and provides APIs for backup and restore. Your role will be to configure application-level recovery and ensure your data is backed up appropriately, while the provider handles the underlying infrastructure's DR capability.
An example could be Azure App Service or Oracle APEX applications, where your apps are redeployed from source control like Git after an incident. Finally, with Software-as-a-Service, disaster recovery is almost entirely vendor-managed. The provider takes full responsibility, offering features like auto-replication and continuous backup, often backed by specific Recovery Point Objective (RPO) and Recovery Time Objective (RTO) SLAs. A common example is how Microsoft 365 or Salesforce manage user data backup and restoration. It's all handled seamlessly by the provider without your direct intervention. Understanding these different approaches to DR is crucial for defining your own business continuity plans in the cloud. 12:46 Lois: Thank you, Orlando, for this insightful discussion. To recap, we spoke about the three main cloud models: IaaS, PaaS, and SaaS, and how each one offers a different mix of control and convenience, impacting cost, scalability, availability, and recovery. Nikita: Yeah, hopefully this helps you pick the right cloud solution for your needs. If you want to learn more about the topics we discussed today, head over to mylearn.oracle.com and search for the Cloud Tech Jumpstart course. In our next episode, we'll take a close look at the essentials of networking. Until then, this is Nikita Abraham… Lois: And Lois Houston, signing off! 13:26 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
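Orlando's point in this episode about PaaS applications needing to gracefully handle retries and transient failures usually comes down to a small piece of code. Here's a minimal retry-with-backoff sketch; the attempt count, delays, and the flaky_operation stand-in are all invented for illustration.

```python
# A minimal retry helper: on failure, wait exponentially longer and retry.
import random
import time

def call_with_retries(operation, max_attempts=4, base_delay=0.5):
    """Run operation(); back off and retry on failure, up to max_attempts."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception as exc:
            if attempt == max_attempts:
                raise  # out of attempts: surface the error to the caller
            # Exponential backoff with jitter to avoid synchronized retries.
            delay = base_delay * (2 ** (attempt - 1)) * random.uniform(0.5, 1.5)
            print(f"attempt {attempt} failed ({exc}); retrying in {delay:.2f}s")
            time.sleep(delay)

if __name__ == "__main__":
    # Simulate a service that fails twice with transient errors, then succeeds.
    flaky_calls = iter([RuntimeError("transient"), RuntimeError("transient"), "ok"])

    def flaky_operation():
        result = next(flaky_calls)
        if isinstance(result, Exception):
            raise result
        return result

    print(call_with_retries(flaky_operation))
```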
Cloud Data Centers: Core Concepts - Part 3 (15:09)
Have you ever considered how a single server can support countless applications and workloads at once? In this episode, hosts Lois Houston and Nikita Abraham, together with Principal OCI Instructor Orlando Gentil, explore the sophisticated technologies that make this possible in modern cloud data centers. They discuss the roles of hypervisors, virtual machines, and containers, explaining how these innovations enable efficient resource sharing, robust security, and greater flexibility for organizations. Cloud Tech Jumpstart: https://mylearn.oracle.com/ou/course/cloud-tech-jumpstart/152992 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. -------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! For the last two weeks, we've been talking about different aspects of cloud data centers. In this episode, Orlando Gentil, Principal OCI Instructor at Oracle University, joins us once again to discuss how virtualization, through hypervisors, virtual machines, and containers, has transformed data centers. 00:58 Lois: That's right, Niki. We'll begin with a quick look at the history of virtualization and why it became so widely adopted. Orlando, what can you tell us about that? Orlando: To truly grasp the power of virtualization, it's helpful to understand its journey from its humble beginnings with mainframes to its pivotal role in today's cloud computing landscape. It might surprise you, but virtualization isn't a new concept. Its roots go back to the 1960s with mainframes. In those early days, the primary goal was to isolate workloads on a single powerful mainframe, allowing different applications to run without interfering with each other. As we moved into the 1990s, the challenge shifted to underutilized physical servers. Organizations often had numerous dedicated servers, each running a single application, leading to significant waste of computing resources. This led to the emergence of virtualization as we know it today, primarily from the 1990s to the 2000s. The core idea here was to run multiple isolated operating systems on a single physical server. This innovation dramatically improved resource utilization and laid the technical foundation for cloud computing, enabling the scalable and flexible environments we rely on today. 02:26 Nikita: Interesting. So, from an economic standpoint, what pushed traditional data centers to change and opened the door to virtualization? Orlando: In the past, running applications often meant running them on dedicated physical servers. This led to a few significant challenges. First, more hardware purchases. Every new application, every new project often required its own dedicated server. This meant constantly buying new physical hardware, which quickly escalated capital expenditure.
Secondly, hand-in-hand with more servers came higher power and cooling costs. Each physical server consumed power and generated heat, necessitating significant investment in electricity and cooling infrastructure. The more servers, the higher these operational expenses became. And finally, a major problem was unused capacity. Despite investing heavily in these physical servers, it was common for them to run well below their full capacity. Applications typically didn't need 100% of a server's resources all the time. This meant we were wasting valuable compute power, memory, and storage, effectively wasting resources and diminishing the return on investment from those expensive hardware purchases. These economic pressures became a powerful incentive to find more efficient ways to utilize data center resources, setting the stage for technologies like virtualization. 04:05 Lois: I guess we can assume virtualization emerged as a financial game-changer. So, what kind of economic efficiencies did virtualization bring to the table? Orlando: From a CapEx or capital expenditure perspective, companies spent less on servers and data center expansion. From an OpEx or operational expenditure perspective, fewer machines meant lower electricity, cooling, and maintenance costs. It also sped up provisioning. Spinning up a new VM took minutes, not days or weeks. That improved agility and reduced the operational workload on IT teams. It also created a more scalable, cost-efficient foundation, which made virtualization not just a technical improvement, but a financial turning point for data centers. This economic efficiency is exactly what cloud providers like Oracle Cloud Infrastructure are built on, using virtualization to deliver scalable, pay-as-you-go infrastructure. 05:09 Nikita: Ok, Orlando. Let's get into the core components of virtualization. To start, what exactly is a hypervisor? Orlando: A hypervisor is a piece of software, firmware, or hardware that creates and runs virtual machines, also known as VMs. Its core function is to allow multiple virtual machines to run concurrently on a single physical host server. It acts as a virtualization layer, abstracting the physical hardware resources like CPU, memory, and storage, and allocating them to each virtual machine as needed, ensuring they can operate independently and securely. 05:49 Lois: And are there types of hypervisors? Orlando: There are two primary types of hypervisors. Type 1 hypervisors, often called bare-metal hypervisors, run directly on the host server's hardware. This means they interact directly with the physical resources, offering high performance and security. Examples include VMware ESXi, Oracle VM Server, and KVM on Linux. They are commonly used in enterprise data centers and cloud environments. In contrast, type 2 hypervisors, also known as hosted hypervisors, run on top of an existing operating system like Windows or macOS. They act as an application within that operating system. Popular examples include VirtualBox, VMware Workstation, and Parallels. These are typically used for personal computing or development purposes, where you might run multiple operating systems on your laptop or desktop. 06:55 Nikita: We've spoken about the foundation provided by hypervisors. So, can we now talk about the virtual entities they manage: virtual machines? What exactly is a virtual machine and what are its fundamental characteristics?
Orlando: A virtual machine is essentially a software-based virtual computer system that runs on a physical host computer. The magic happens with the hypervisor. The hypervisor's job is to create and manage these virtual environments, abstracting the physical hardware so that multiple VMs can share the same underlying resources without interfering with each other. Each VM operates like a completely independent computer with its own operating system and applications. 07:40 Lois: What are the benefits of this? Orlando: Each VM is isolated from the others. If one VM crashes or encounters an issue, it doesn't affect the other VMs running on the same physical host. This greatly enhances stability and security. A powerful feature is the ability to run different operating systems side by side on the very same physical host. You could have a Windows VM, a Linux VM, and even other specialized operating systems, all operating simultaneously. Consolidating workloads directly addresses the unused capacity problem. Instead of one application per physical server, you can now run multiple workloads, each in its own VM, on a single powerful physical server. This dramatically improves hardware utilization, reducing the need for constant new hardware purchases and lowering power and cooling costs. And by consolidating workloads, virtualization makes it possible for cloud providers to dynamically create and manage vast pools of computing resources. This allows users to quickly provision and scale virtual servers on demand, tapping into these shared pools of CPU, memory, and storage as needed, rather than being tied to a single physical machine. 09:10 Oracle University's Race to Certification 2025 is your ticket to free training and certification in today's hottest technology. Whether you're starting with Artificial Intelligence, Oracle Cloud Infrastructure, Multicloud, or Oracle Data Platform, this challenge covers it all! Learn more about your chance to win prizes and see your name on the Leaderboard by visiting education.oracle.com/race-to-certification-2025. That's education.oracle.com/race-to-certification-2025. 09:54 Nikita: Welcome back! Orlando, let's move on to containers. Many see them as a lighter, more agile way to build and run applications. What's your take? Orlando: A container packages an application and all its dependencies, like libraries and other binaries, into a single, lightweight executable unit. Unlike a VM, a container shares the host operating system's kernel, running on top of the container runtime process. This architectural difference provides several key advantages. Containers are incredibly portable. They can be taken virtually anywhere, from a developer's laptop to a cloud environment, and run consistently, eliminating "it works on my machine" issues. Because containers share the host OS kernel, they don't need to bundle a full operating system themselves. This results in significantly smaller footprints and less administration overhead compared to VMs. They are faster to start. Without the need to boot a full operating system, containers can start up in seconds, or even milliseconds, providing rapid deployment and scaling capabilities. 11:12 Nikita: Ok. Throughout our conversation, you've spoken about the various advantages of virtualization, but let's consolidate them now. Orlando: From a security standpoint, virtualization offers several crucial benefits. Each VM operates in its own isolated sandbox.
This means if one VM experiences a security breach, the impact is generally contained to that single virtual machine, significantly limiting the spread of potential threats across your infrastructure. Containers also provide some isolation. Virtualization allows for rapid recovery. This is invaluable for disaster recovery or undoing changes after a security incident. You can implement separate firewalls, access rules, and network configurations for each VM. This granular control reduces the overall exposure and attack surface across your virtualized environments, making it harder for malicious actors to move laterally. Beyond security, virtualization also brings significant operational and agility benefits for IT management. Virtualization dramatically improves operational efficiency and agility. Things are faster. With virtualization, you can provision new servers or containers in minutes rather than days or weeks. This speed allows for quicker deployment of applications and services. It becomes much simpler to deploy consistent environments using templates and preconfigured VM images or containers. This reduces errors and ensures uniformity across your infrastructure. It's more scalable. Virtualization makes your infrastructure far more scalable. You can reshape VMs and containers to meet changing demands, ensuring your resources align precisely with your needs. These operational benefits directly contribute to the power of cloud computing, especially when we consider virtualization's role in enabling cloud scalability. Virtualization is the very backbone of modern cloud computing, fundamentally enabling its scalability. It allows multiple virtual machines to run on a single physical server, maximizing hardware utilization, which is essential for cloud providers. This capability is the core of infrastructure-as-a-service offerings, where users can provision virtualized compute resources on demand. Virtualization makes services globally scalable. Resources can be easily deployed and managed across different geographic regions to meet worldwide demand. Finally, it provides elasticity, meaning resources can be automatically scaled up or down in response to fluctuating workloads, ensuring optimal performance and cost efficiency. 14:21 Lois: That's amazing. Thank you, Orlando, for joining us once again. Nikita: Yeah, and remember, if you want to learn more about the topics we covered today, go to mylearn.oracle.com and search for the Cloud Tech Jumpstart course. Lois: Well, that's all we have for today. Until next time, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 14:40 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
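To make the consolidation math from this episode concrete, here is a minimal Python sketch that estimates how many VMs one physical host could run and how many single-application servers that replaces. All figures are hypothetical placeholders for illustration, not numbers from the episode.

# Illustrative consolidation estimate under assumed, made-up host and VM sizes.
host_cores = 64        # physical CPU cores on one host (assumption)
host_ram_gb = 512      # RAM on one host (assumption)
cpu_overcommit = 2.0   # hypervisors commonly allow CPU overcommit; this ratio is an assumption

vm_vcpus = 4           # vCPUs per VM (assumption)
vm_ram_gb = 16         # RAM per VM (assumption)

# CPU can be overcommitted because most workloads idle; RAM generally cannot.
vms_by_cpu = int(host_cores * cpu_overcommit / vm_vcpus)
vms_by_ram = host_ram_gb // vm_ram_gb
vms_per_host = min(vms_by_cpu, vms_by_ram)

print(f"VMs per host: {vms_per_host}")
# Under the old one-application-per-server model, each VM replaces one machine.
print(f"Physical servers consolidated onto one host: {vms_per_host}")

With these assumed numbers, one host absorbs 32 servers' worth of workloads, which is exactly the CapEx and OpEx saving Orlando describes.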
Cloud Data Centers: Core Concepts - Part 2 14:16
Have you ever wondered where all your digital memories, work projects, or favorite photos actually live in the cloud? In this episode, Lois Houston and Nikita Abraham are joined by Principal OCI Instructor Orlando Gentil to discuss cloud storage. They explore how data is carefully organized, the different ways it can be stored, and what keeps it safe and easy to find. Cloud Tech Jumpstart: https://mylearn.oracle.com/ou/course/cloud-tech-jumpstart/152992 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------------------------ Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Nikita: Welcome to the Oracle University Podcast! I'm Nikita Abraham, Team Lead of Editorial Services with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hey there! Last week, we spoke about the differences between traditional and cloud data centers, and covered components like CPU, RAM, and operating systems. If you haven't listened to the episode yet, I'd suggest going back and listening to it before you dive into this one. Nikita: Joining us again is Orlando Gentil, Principal OCI Instructor at Oracle University, and we're going to ask him about another fundamental concept: storage. 01:04 Lois: That's right, Niki. Hi Orlando! Thanks for being with us again today. You introduced cloud data centers last week, but tell us, how is data stored and accessed in these centers? Orlando: At a fundamental level, storage is where your data resides persistently. Data stored on a storage device is accessed by the CPU and, for specialized tasks, the GPU. The RAM acts as a high-speed intermediary, temporarily holding data that the CPU and the GPU are actively working on. This cyclical flow ensures that applications can effectively retrieve, process, and store information, forming the backbone of our computing operations in the data center. 01:52 Nikita: But how is data organized and controlled on disks? Orlando: To effectively store and manage data on physical disks, a structured approach is required, which is defined by file systems and permissions. The process begins with disks. These are the raw physical storage devices. Before data can be written to them, disks are typically divided into partitions. A partition is a logical division of a physical disk that acts as if it were a separate physical disk. This allows you to organize your storage space and even install multiple operating systems on a single drive. Once partitions are created, they are formatted with a file system. 02:40 Nikita: Ok, sorry, but I have to stop you there. Can you explain what a file system is? And how is data organized using a file system? Orlando: The file system is the method and the data structure that an operating system uses to organize and manage files on storage devices. It dictates how data is named, stored, retrieved, and managed on the disk, essentially providing the roadmap for data. Common file systems include NTFS for Windows and ext4 or XFS for Linux.
Within this file system, data is organized hierarchically into directories, also known as folders. These containers help to logically group related files, which are the individual units of data, whether they are documents, images, videos, or applications. Finally, overseeing this entire organization are permissions. 03:42 Lois: And what are permissions? Orlando: Permissions define who can access specific files and directories and what actions they are allowed to perform-- for example, read, write, or execute. This access control, often managed by user, group, and other permissions, is fundamental for security, data integrity, and multi-user environments within a data center. 04:09 Lois: Ok, now that we have a good understanding of how data is organized logically, can we talk about how data is stored locally within a server? Orlando: Local storage refers to storage devices directly attached to a server or computer. The three common types are hard disk drives, solid state drives, and NVMe devices. Hard disk drives are traditional storage devices using spinning platters to store data. They offer large capacity at a lower cost per gigabyte, making them suitable for bulk data storage when high performance isn't the top priority. Unlike hard disks, solid state drives use flash memory to store data, similar to USB drives but on a larger scale. They provide significantly faster read and write speeds, better durability, and lower power consumption than hard disks, making them ideal for operating systems, applications, and frequently accessed data. Non-Volatile Memory Express is a communication interface specifically designed for solid state drives that connects directly to the PCI Express bus. NVMe offers even faster performance than traditional SATA-based solid state drives by reducing latency and increasing bandwidth, making it the top choice for demanding workloads that require extreme speed, such as high-performance databases and AI applications. Each type serves different performance and cost requirements within a data center. While local storage is essential for immediate access, data centers also heavily rely on storage that isn't directly attached to a single server. 05:59 Lois: I'm guessing you're hinting at remote storage. Can you tell us more about that, Orlando? Orlando: Remote storage refers to data storage solutions that are not physically connected to the server or client accessing them. Instead, they are accessed over the network. This setup allows multiple clients or servers to share access to the same storage resources, centralizing data management and improving data availability. This architecture is fundamental to cloud computing, enabling vast pools of shared storage that can be dynamically provisioned to various users and applications. 06:35 Lois: Let's talk about the common forms of remote storage. Can you run us through them? Orlando: One of the most common and accessible forms of remote storage is Network Attached Storage or NAS. NAS is a dedicated file storage device connected to a network that allows multiple users and client devices to retrieve data from centralized disk capacity. It's essentially a server dedicated to serving files. A client connects to the NAS over the network. And the NAS then provides access to files and folders. NAS devices are ideal for scenarios requiring shared file access, such as document collaboration, centralized backups, or serving media files, making them very popular in both home and enterprise environments.
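To ground the file, directory, and permission concepts from this episode, here is a small Python sketch using only the standard library; the file name is a placeholder, and the sketch simply creates a file and reads back its permission bits.

import os
import stat

path = "example.txt"     # hypothetical file; any path on your system works
open(path, "w").close()  # create an empty file to inspect

mode = os.stat(path).st_mode

# stat.filemode() renders the bits the way 'ls -l' does, e.g. '-rw-r--r--'.
print(stat.filemode(mode))

# Inspect individual read/write/execute bits for owner, group, and others.
print("owner can read: ", bool(mode & stat.S_IRUSR))
print("group can write:", bool(mode & stat.S_IWGRP))
print("others can exec:", bool(mode & stat.S_IXOTH))

This is the same user/group/other access control Orlando mentions, as the operating system exposes it to programs.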
While NAS provides file-level access over a network, some applications, especially those requiring high performance and direct block-level access to storage, need a different approach. 07:38 Nikita: And what might this approach be? Orlando: Internet Small Computer System Interface, or iSCSI, which provides block-level storage over an IP network. iSCSI is a standard that allows the SCSI protocol, traditionally used for local storage, to be sent over IP networks. Essentially, it enables servers to access storage devices as if they were directly attached, even though they are located remotely on the network. This means it can leverage standard Ethernet infrastructure, making it a cost-effective solution for creating high-performance, centralized storage accessible over an existing network. It's particularly useful for server virtualization and database environments where block-level access is preferred. While iSCSI provides block-level access over standard IP, for environments demanding even higher performance, lower latency, and greater dedicated throughput, a specialized network is often deployed. 08:47 Nikita: And what's this specialized network called? Orlando: Storage Area Network or SAN. A Storage Area Network or SAN is a high-speed network specifically designed to provide block-level access to consolidated shared storage. Unlike NAS, which provides file-level access, a SAN presents storage volumes to servers as if they were local disks, allowing for very high performance for applications like databases and virtualized environments. While iSCSI SANs use Ethernet, many high-performance SANs utilize Fibre Channel for even faster and more reliable data transfer, making them a cornerstone of enterprise data centers where performance and availability are paramount.
For these scenarios, a specialized low-cost storage tier, known as archive storage, comes into play. 12:02 Lois: And what's that exactly? Orlando: Archive storage is specifically designed for long-term backup and retention of data that you rarely, if ever, access. This includes critical information, like old records, compliance data that needs to be kept for regulatory reasons, or disaster recovery backups. The key characteristic of archive storage is extremely low cost per gigabyte, achieved by optimizing for infrequent access rather than speed. Historically, tape backup systems were the common solution for archiving, where data from a data center is moved to tape. In modern cloud environments, this has evolved into cloud backup solutions. Cloud-based archiving leverages highly cost-effective cloud storage tiers that are purpose-built for long-term retention, providing a scalable and often more reliable alternative to physical tapes. 13:05 Lois: Thank you, Orlando, for taking the time to talk to us about the hardware and software layers of cloud data centers. This information will surely help our listeners to make informed decisions about cloud infrastructure to meet their workload needs in terms of performance, scalability, cost, and management. Nikita: That's right, Lois. And if you want to learn more about what we discussed today, head over to mylearn.oracle.com and search for the Cloud Tech Jumpstart course. Lois: In our next episode, we'll take a look at more of the fundamental concepts within modern cloud environments, such as hypervisors, virtualization, and more. I can't wait to learn more about it. Until then, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 13:47 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
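As a way to visualize the object storage model described in this episode, here is a purely conceptual Python sketch of a flat, metadata-rich store. It is a teaching toy under stated assumptions, not the API of any real service such as OCI Object Storage, which exposes this idea through HTTP endpoints and SDKs.

import hashlib
from datetime import datetime, timezone

class ToyObjectStore:
    # Flat namespace: key -> (data bytes, metadata dict). No real directories exist.
    def __init__(self):
        self._objects = {}

    def put(self, key, data, **metadata):
        metadata.update(
            etag=hashlib.md5(data).hexdigest(),  # content fingerprint
            size=len(data),
            last_modified=datetime.now(timezone.utc).isoformat(),
        )
        self._objects[key] = (data, metadata)

    def get(self, key):
        return self._objects[key]

store = ToyObjectStore()
# Keys may look like paths, but the namespace stays flat.
store.put("backups/2025/db.dump", b"\x00" * 1024, tier="archive")
data, meta = store.get("backups/2025/db.dump")
print(meta)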
Cloud Data Centers: Core Concepts - Part 1 16:45
Curious about what really goes on inside a cloud data center? In this episode, Lois Houston and Nikita Abraham chat with Principal OCI Instructor Orlando Gentil about how cloud data centers are transforming the way organizations manage technology. They explore the differences between traditional and cloud data centers, the roles of CPUs, GPUs, and RAM, and why operating systems and remote access matter more than ever. Cloud Tech Jumpstart: https://mylearn.oracle.com/ou/course/cloud-tech-jumpstart/152992 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! Today, we're covering the fundamentals you need to be successful in a cloud environment. If you're new to cloud, coming from a SaaS environment, or planning to move from on-premises to the cloud, you won't want to miss this. With us today is Orlando Gentil, Principal OCI Instructor at Oracle University. Hi Orlando! Thanks for joining us. 01:01 Lois: So Orlando, we know that Oracle has been a pioneer of cloud technologies and has been pivotal in shaping modern cloud data centers, which are different from traditional data centers. For our listeners who might be new to this, could you tell us what a traditional data center is? Orlando: A traditional data center is a physical facility that houses an organization's mission-critical IT infrastructure, including servers, storage systems, and networking equipment, all managed on site. 01:32 Nikita: So why would anyone want to use a cloud data center? Orlando: The traditional model requires significant upfront investment in physical hardware, which you are then responsible for maintaining along with the underlying infrastructure like physical security, HVAC, backup power, and communication links. In contrast, cloud data centers offer a more agile approach. You essentially rent the infrastructure you need, paying only for what you use. In the traditional data center, scaling resources up and down can be a slow and complex process. In cloud data centers, scaling is automated and elastic, allowing resources to adjust dynamically based on demand. This shift allows businesses to move their focus from the constant upkeep of infrastructure to innovation and growth. The move represents a shift from maintenance to momentum, enabling optimized costs and efficient scaling. This fundamental shift in how IT infrastructure is managed and consumed is precisely what we mean by moving to the cloud. 02:39 Lois: So, when we talk about moving to the cloud, what does it really mean for businesses today? Orlando: Moving to the cloud represents the strategic transition from managing your own on-premise hardware and software to leveraging internet-based computing services provided by a third party.
This involves migrating your applications, data, and IT operations to a cloud environment. This transition typically aims to reduce operational overhead, increase flexibility, and enhance scalability, allowing organizations to focus more on their core business functions. 03:17 Nikita: Orlando, what's the "brain" behind all this technology? Orlando: A CPU or Central Processing Unit is the primary component that performs most of the processing inside the computer or server. It performs calculations, handling the complex mathematics and logic that drive all applications and software. It processes instructions, running tasks and operations in the background that are essential for any application. A CPU is critical for performance, as it directly impacts the overall speed and efficiency of the data center. It also manages system activities, coordinating user input, various application tasks, and the flow of data throughout the system. Ultimately, the CPU drives data center workloads from basic server operations to powering cutting edge AI applications. 04:10 Lois: To better understand how a CPU achieves these functions and processes information so efficiently, I think it's important for us to grasp its fundamental architecture. Can you briefly explain the fundamental architecture of a CPU, Orlando? Orlando: When discussing CPUs, you will often hear about sockets, cores, and threads. A socket refers to the physical connection on the motherboard where a CPU chip is installed. A single server motherboard can have one or more sockets, each holding a CPU. A core is an independent processing unit within a CPU. Modern CPUs often have multiple cores, enabling them to handle several instructions simultaneously, thus increasing processing power. Think of it as having multiple mini CPUs on a single chip. Threads are virtual components that allow a single CPU core to handle multiple sequences of instructions, or threads, concurrently. This technology, often called hyperthreading, makes a single core appear as two logical processors to the operating system, further enhancing efficiency. 05:27 Lois: Ok. And how do CPUs process commands? Orlando: Beyond these internal components, CPUs are also designed based on different instruction set architectures which dictate how they process commands. CPU architectures are primarily categorized into two designs-- Complex Instruction Set Computer or CISC and Reduced Instruction Set Computer or RISC. CISC processors are designed to execute complex instructions in a single step, which can reduce the number of instructions needed for a task, but often leads to higher power consumption. These are commonly found in traditional Intel and AMD CPUs. In contrast, RISC processors use a simpler, more streamlined set of instructions. While this might require more steps for a complex task, each step is faster and more energy efficient. This architecture is prevalent in ARM-based CPUs. 06:34 Are you looking to boost your expertise in enterprise AI? Check out the Oracle AI Agent Studio for Fusion Applications Developers course and professional certification—now available through Oracle University. This course helps you build, customize, and deploy AI Agents for Fusion HCM, SCM, and CX, with hands-on labs and real-world case studies. Ready to set yourself apart with in-demand skills and a professional credential? Learn more and get started today! Visit mylearn.oracle.com for more details. 07:09 Nikita: Welcome back! We were discussing CISC and RISC processors.
So Orlando, where are they typically deployed? Are there any specific computing environments and use cases where they excel? Orlando: On the CISC side, you will find them powering enterprise virtualization and server workloads, such as bare metal hypervisors and large databases, where complex instructions can be efficiently processed. High-performance computing, which includes demanding simulations, intricate analysis, and many traditional machine learning systems. Enterprise software suites and business applications like ERP, CRM, and other complex enterprise systems that benefit from fewer steps per instruction. Conversely, RISC architectures are often preferred for cloud-native workloads such as Kubernetes clusters, where simpler, faster instructions and energy efficiency are paramount for distributed computing. Mobile device management and edge computing, including cell phones and IoT devices, where power efficiency and compact design are critical. Cost-optimized cloud hosting supporting distributed workloads, where the cumulative energy savings and simpler design lead to more economical operations. The choice between CISC and RISC depends heavily on the specific workload and performance requirements. While CPUs are versatile generalists, handling a broad range of tasks, modern data centers also heavily rely on another crucial processing unit for specialized workloads. 08:54 Lois: We've spoken a lot about CPUs, but our conversation would be incomplete without understanding what a Graphics Processing Unit is and why it's important. What can you tell us about GPUs, Orlando? Orlando: A GPU or Graphics Processing Unit is distinct from a CPU. While the CPU is a generalist excelling at sequential processing and managing a wide variety of tasks, the GPU is a specialist. It is designed specifically for parallel, compute-heavy tasks. This means it can perform many calculations simultaneously, making it incredibly efficient for workloads like rendering graphics, scientific simulations, and especially in areas like machine learning and artificial intelligence, where massive parallel computation is required. In the modern data center, GPUs are increasingly vital for accelerating these specialized, data-intensive workloads. 09:58 Nikita: Besides the CPU and GPU, there's another key component that collaborates with these processors to facilitate efficient data access. What role does Random Access Memory play in all of this? Orlando: The core function of RAM is to provide faster access to information in use. Imagine your computer or server needing to retrieve data from a long-term storage device, like a hard drive. This process can be relatively slow. RAM acts as a temporary high-speed buffer. When your CPU or GPU needs data, it first checks RAM. If the data is there, it can be accessed almost instantaneously, significantly speeding up operations. This rapid access to frequently used data and programming instructions is what allows applications to run smoothly and systems to respond quickly, making RAM a critical factor in overall data center performance. While RAM provides quick access to active data, it's volatile, meaning data is lost when power is off. This is unlike persistent data storage, which holds the information that needs to remain available even after a system shuts down. 11:14 Nikita: Let's now talk about operating systems in cloud data centers and how they help everything run smoothly. Orlando, can you give us a quick refresher on what an operating system is, and why it is important for computing devices?
Orlando: At its core, an operating system, or OS, is the fundamental software that manages all the hardware and software resources on a computer. Think of it as a central nervous system that allows everything else to function. It performs several critical tasks, including managing memory, deciding which programs get access to memory and when; managing processes, allocating CPU time to different tasks and applications; managing files, organizing data on storage devices; and handling input and output, facilitating communication between the computer and its peripherals, like keyboards, mice, and displays. And perhaps, most importantly, it provides the user interface that allows us to interact with the computer. 12:19 Lois: Can you give us a few examples of common operating systems? Orlando: Common operating system examples you are likely familiar with include Microsoft Windows and macOS for personal computers, iOS and Android for mobile devices, and various distributions of Linux, which are incredibly prevalent in servers and increasingly in cloud environments. 12:41 Lois: And how are these operating systems specifically utilized within the demanding environment of cloud data centers? Orlando: The two dominant operating systems in data centers are Linux and Windows. Linux is further categorized into enterprise distributions, such as Oracle Linux or SUSE Linux Enterprise Server, which offer commercial support and stability, and community distributions, like Ubuntu and CentOS, which are developed and maintained by communities and are often free to use. On the other side, we have Windows, primarily represented by Windows Server, which is Microsoft's server operating system known for its robust features and integration with other Microsoft products. While both Linux and Windows are powerful operating systems, their licensing models can differ significantly, which is a crucial factor to consider when deploying them in a data center environment. 13:43 Nikita: In what way do the licensing models differ? Orlando: When we talk about licensing, the differences between Linux and Windows become quite apparent. For Linux, enterprise distributions come with associated support fees, which can be bundled into the initial cost or priced separately. These fees provide access to professional support and updates. On the other hand, community distributions are typically free of charge, with some providers offering basic community-driven support. Windows Server, in contrast, is a commercial product. Its license cost is generally included in the instance cost when using cloud providers or purchased directly for on-premise deployments. It's also worth noting that some cloud providers offer a bring-your-own-license, or BYOL, program, allowing organizations to use their existing Windows licenses in the cloud, which can sometimes provide cost efficiencies. 14:46 Nikita: Beyond choosing an operating system, are there any other important aspects of data center management? Orlando: Another critical aspect of data center management is how you remotely access and interact with your servers. Remote access is fundamental for managing servers in a data center, as you are rarely physically sitting in front of them. The two primary methods that we use are SSH, or secure shell, and RDP, or Remote Desktop Protocol. Secure shell is widely used for secure command line access to Linux servers. It provides an encrypted connection, allowing you to execute commands, transfer files, and manage your servers securely from a remote location.
The Remote Desktop Protocol is predominantly used for graphical remote access to Windows servers. RDP allows you to see and interact with the server's desktop interface, just as if you were sitting directly in front of it, making it ideal for tasks that require a graphical user interface. 15:54 Lois: Thank you so much, Orlando, for shedding light on this topic. Nikita: Yeah, that's a wrap for today! To learn more about what we discussed, head over to mylearn.oracle.com and search for the Cloud Tech Jumpstart course. In our next episode, we'll take a close look at how data is stored and managed. Until then, this is Nikita Abraham… Lois: And Lois Houston, signing off! 16:16 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
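As a companion to the SSH discussion in this episode, here is a short Python sketch using the third-party paramiko library to run one command on a remote Linux server over an encrypted channel. The library choice, hostname, username, and key path are all assumptions for illustration; the episode itself does not prescribe a tool.

import paramiko

HOST = "server.example.com"   # placeholder hostname
USER = "opc"                  # placeholder username
KEY = "/home/me/.ssh/id_rsa"  # placeholder private key path

client = paramiko.SSHClient()
# Demo-only: auto-accept unknown host keys; verify known_hosts in production.
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username=USER, key_filename=KEY)

# Execute one command remotely; output comes back over the encrypted channel.
stdin, stdout, stderr = client.exec_command("uname -a")
print(stdout.read().decode())

client.close()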
AI Across Industries and the Importance of Responsible AI 18:55
AI is reshaping industries at a rapid pace, but as its influence grows, so do the ethical concerns that come with it. This episode examines how AI is being applied across sectors such as healthcare, finance, and retail, while also exploring the crucial issue of ensuring that these technologies align with human values. In this conversation, Lois Houston and Nikita Abraham are joined by Hemant Gahankari, Senior Principal OCI Instructor, who emphasizes the importance of fairness, inclusivity, transparency, and accountability in AI systems. AI for You: https://mylearn.oracle.com/ou/course/ai-for-you/152601/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ---------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hey everyone! In our last episode, we spoke about how Oracle integrates AI capabilities into its Fusion Applications to enhance business workflows, and we focused on Predictive, Generative, and Agentic AI. Lois: Today, we'll discuss the various applications of AI. This is the final episode in our AI series, and before we close, we'll also touch upon ethical and responsible AI. 01:01 Nikita: Taking us through all of this is Senior Principal OCI Instructor Hemant Gahankari. Hi Hemant! AI is pretty much everywhere today. So, can you explain how it is being used in industries like retail, hospitality, health care, and so on? Hemant: AI isn't just for sci-fi movies anymore. It's helping doctors spot diseases earlier and even discover new drugs faster. Imagine an AI that can look at an X-ray and say, hey, there is something sketchy here before a human even notices. Wild, right? Banks and fintech companies are all over AI. Fraud detection. AI has got it covered. Those robo advisors managing your investments? That's AI too. Ever noticed how e-commerce companies always seem to know what you want? That's AI studying your habits and nudging you towards that next purchase or binge watch. Factories are getting smarter. AI predicts when machines will fail so they can fix them before everything grinds to a halt. Less downtime, more efficiency. Everyone wins. Farming has gone high tech. Drones and AI analyze crops, optimize water use, and even help with harvesting. Self-driving cars get all the hype, but even your everyday GPS uses AI to dodge traffic jams. And if AI can save me from sitting in bumper-to-bumper traffic, I'm all for it. 02:40 Nikita: Agreed! Thanks for that overview, but let's get into specific scenarios within each industry. Hemant: Let us take a scenario in the retail industry-- a retail clothing line with dozens of brick-and-mortar stores. Maintaining proper inventory levels in stores and regional warehouses is critical for retailers. In this low-margin business, being out of a popular product is especially challenging during sales and promotions. 
Managers want to delight shoppers and increase sales, but without overbuying. That's where AI steps in. The retailer has multiple information sources, ranging from point-of-sale terminals to warehouse inventory systems. This data can be used to train a forecasting model that can make predictions, such as demand increase due to a holiday or planned marketing promotion, and determine the time required to acquire and distribute the extra inventory. Most ERP-based forecasting systems can produce sophisticated reports. A generative AI report writer goes further, creating custom plain-language summaries of these reports tailored for each store, instructing managers about how to maximize sales of well-stocked items while mitigating possible shortages. 04:11 Lois: Ok. How is AI being used in the hospitality sector, Hemant? Hemant: Let us take an example of a hotel chain that depends on positive ratings on social media and review websites. One common challenge they face is keeping track of online reviews, leading to missed opportunities to engage unhappy customers complaining on social media. Hotel managers don't know what's being said fast enough to address problems in real time. Here, AI can be used to create a large data set from the tens of thousands of previously published online reviews. A textual language AI system can perform a sentiment analysis across the data to determine a baseline that can be periodically re-evaluated to spot trends. Data scientists could also build a model that correlates these textual messages and their sentiments against specific hotel locations and other factors, such as weather. Generative AI can extract valuable suggestions and insights from both positive and negative comments. 05:27 Nikita: That's great. And what about Financial Services? I know banks use AI quite often to detect fraud. Hemant: Unfortunately, fraud can creep into any part of a bank's retail operations. Fraud can happen with online transactions, from a phone or browser, and offsite ATMs too. Without trust, banks won't have customers or shareholders. Excessive fraud and delays in detecting it can violate financial industry regulations. Fraud detection combines AI technologies, such as computer vision to interpret scanned documents, document verification to authenticate IDs like driver's licenses, and machine learning to analyze patterns. These tools work together to assess the risk of fraud in each transaction within seconds. When the system detects a high risk, it triggers automated responses, such as placing holds on withdrawals or requesting additional identification from customers, to prevent fraudulent activity and protect both the business and its clients. 06:42 Nikita: Wow, interesting. And how is AI being used in the health industry, especially when it comes to improving patient care? Hemant: Medical appointments can be frustrating for everyone involved—patients, receptionists, nurses, and physicians. There are many time-consuming steps, including scheduling, checking in, interactions with the doctors, checking out, and follow-ups. AI can fix this problem by using electronic health records to analyze lab results, paper forms, scans, and structured data, summarizing insights for doctors with the latest research and patient history. This helps practices reduce costs, boost earnings, and deliver faster, more personalized care. 07:32 Lois: Let's take a look at one more industry. How is manufacturing using AI?
Hemant: A factory that makes metal parts and other products uses both visual inspections and electronic means to monitor product quality. A part that fails to meet the requirements may be reworked or repurposed, or it may need to be scrapped. The factory seeks to maximize profits and throughput by shipping as much good material as possible, while minimizing waste by detecting and handling defects early. The way AI can help here is with the quality assurance process, which creates X-ray images. This data can be interpreted by computer vision, which can learn to identify cracks and other weak spots, after being trained on a large data set. In addition, problematic or ambiguous data can be highlighted for human inspectors. 08:36 Oracle University's Race to Certification 2025 is your ticket to free training and certification in today's hottest tech. Whether you're starting with Artificial Intelligence, Oracle Cloud Infrastructure, Multicloud, or Oracle Data Platform, this challenge covers it all! Learn more about your chance to win prizes and see your name on the Leaderboard by visiting education.oracle.com/race-to-certification-2025. That's education.oracle.com/race-to-certification-2025. 09:20 Nikita: Welcome back! AI can be used effectively to automate a variety of tasks to improve productivity, efficiency, and cost savings. But I'm sure AI has its constraints too, right? Can you talk about what happens if AI isn't able to echo human ethics? Hemant: AI can fail due to lack of ethics. AI can spot patterns, not make moral calls. It doesn't feel guilt, understand context, or take responsibility. That is still up to us. Decisions are only as good as the data behind them. For example, health care AI has underdiagnosed women because research data was mostly male. Artificial narrow intelligence tends to automate discrimination at scale. Recruiting AI downgraded resumes just because they contained the word "women's" (for example, women's chess club). Who is responsible when AI fails? For example, if a self-driving car hits someone, we cannot blame the car. Then who owns the failure? The programmer? The CEO? Can we really trust corporations or governments to have correctly programmed AI not to be evil? So, it's clear that AI needs oversight to function smoothly. 10:48 Lois: So, Hemant, how can we design AI in ways that respect and reflect human values? Hemant: Think of ethics like a tree. It needs all parts working together. Roots represent intent. That is our values and principles. The trunk stands for safeguards, our systems and structures. And the branches are the outcomes we aim for. If the roots are shallow, the tree falls. If the trunk is weak, damage seeps through. The health of roots and trunk shapes the strength of our ethical outcomes. Fairness means nothing without ethical intent behind it. For example, a bank promotes its loan algorithm as fair. But it uses zip codes in decision-making, effectively penalizing people based on race. That's not fairness. That's harm disguised as data. Inclusivity depends on the intent of sustainability. Inclusive design isn't just a check box. It needs a long-term commitment. For example, controllers for gamers with disabilities are only possible because of sustained R&D and intentional design choices. Without investment in inclusion, accessibility is left behind. Transparency depends on the safeguard of robustness. Transparency is only useful if the system is secure and resilient.
For example, a medical AI may be explainable, but if it is vulnerable to hacking, transparency won't matter. Accountability depends on the safeguards of privacy and traceability. You can't hold people accountable if there is no trail to follow. For example, after a fatal self-driving car crash, deleted system logs meant no one could be held responsible. Without auditability, accountability collapses. So remember, outcomes are what we see, but they rely on intent to guide priorities and safeguards to support execution. That's why humans must have a final say. AI has no grasp of ethics, but we do. 13:16 Nikita: So, what you're saying is ethical intent and robust AI safeguards need to go hand in hand if we are to truly leverage AI we can trust. Hemant: When it comes to AI, preventing harm is a must. Take self-driving cars, for example. Keeping pedestrians safe is absolutely critical, which means the technology has to be rock solid and reliable. At the same time, fairness and inclusivity can't be overlooked. If an AI system used for hiring learns from biased past data, say, mostly male candidates being hired, it can end up repeating those biases, shutting out qualified candidates unfairly. Transparency and accountability go hand in hand. Imagine a loan rejection where the AI's decision isn't clear or explainable. It becomes impossible for someone to challenge or understand why they were turned down. And of course, robustness supports fairness too. Loan approval systems need strong security to prevent attacks that could manipulate decisions and undermine trust. We must build AI that reflects human values and has safeguards. This makes sure that AI is fair, inclusive, transparent, and accountable. 14:44 Lois: Before we wrap, can you talk about why AI can fail? Let's continue with your analogy of the tree. Can you explain how AI failures occur and how we can address them? Hemant: Root elements like do not harm and sustainability are fundamental to ethical AI development. When these roots fail, the consequences can be serious. For example, a clear failure of do not harm is AI-powered surveillance tools misused by authoritarian regimes. This happens because there were no ethical constraints guiding how the technology was deployed. The solution is clear-- implement strong ethical use policies and conduct human rights impact assessments to prevent such misuse. On the sustainability front, training AI models can consume massive amounts of energy. This failure occurs because environmental costs are not considered. To fix this, organizations are adopting carbon-aware computing practices to minimize AI's environmental footprint. By addressing these root failures, we can ensure AI is developed and used responsibly with respect for human rights and the planet. An example of a robustness failure can be a chatbot hallucinating nonexistent legal precedents used in court filings. This could be due to training on unverified internet data and no fact-checking layer. This can be fixed by grounding in authoritative databases. An example of a privacy failure can be an AI facial recognition database created without user consent. The reason being that no consent was taken for data collection. This can be fixed by adopting privacy-preserving techniques. An example of a fairness failure can be generated images of CEOs as white men and nurses as women or minorities. The reason being training on imbalanced internet images reflecting societal stereotypes. And the fix is to use a diverse set of images.
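One way to make the fairness failures Hemant lists measurable is a simple demographic parity check. The sketch below is a minimal illustration in plain Python with made-up outcomes; the 0.8 threshold is the informal "four-fifths rule" heuristic, an assumption rather than anything prescribed in the episode.

# Hypothetical model decisions per group: 1 = positive outcome (e.g., shortlisted).
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],
}

rates = {group: sum(v) / len(v) for group, v in outcomes.items()}
print("selection rates:", rates)

# Demographic parity ratio: worst-off group's rate over best-off group's rate.
ratio = min(rates.values()) / max(rates.values())
print(f"parity ratio: {ratio:.2f}")

if ratio < 0.8:  # four-fifths rule heuristic
    print("Potential disparate impact: review training data and features.")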
17:18 Lois: I think this would be incomplete if we don't talk about inclusivity, transparency, and accountability failures. How can they be addressed, Hemant? Hemant: An example of an inclusivity failure can be a voice assistant not understanding accents. The reason being that training data lacked diversity. And the fix is to use inclusive data. An example of a transparency and accountability failure can be teachers being unable to challenge AI-generated performance scores due to opaque calculations. The reason being that no explainability tools were used. The fix being that high-impact AI needs human review pathways and explainability built in. 18:04 Lois: Thank you, Hemant, for a fantastic conversation. We got some great insights into responsible and ethical AI. Nikita: Thank you, Hemant! If you're interested in learning more about the topics we discussed today, head over to mylearn.oracle.com and search for the AI for You course. Until next time, this is Nikita Abraham… Lois: And Lois Houston, signing off! 18:26 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
Want to make AI work for your business? In today's episode, Lois Houston and Nikita Abraham continue their discussion of AI in Oracle Fusion Applications by focusing on three key AI capabilities: predictive, generative, and agentic. Joining them is Principal Instructor Yunus Mohammed, who explains how predictive, generative, and agentic AI can optimize efficiency, support decision-making, and automate tasks—all without requiring technical expertise. AI for You: https://mylearn.oracle.com/ou/course/ai-for-you/152601/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------------------------------ Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Nikita: Welcome to the Oracle University Podcast! I'm Nikita Abraham, Team Lead: Editorial Services with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hi there! In our last episode, we explored the essential components of the Oracle AI stack and spoke about Oracle's suite of AI services. Nikita: Yeah, and in today's episode, we're going to go down a similar path and take a closer look at the AI functionalities within Oracle Fusion Applications. 00:53 Lois: With us today is Principal Instructor Yunus Mohammed. Hi Yunus! It's lovely to have you back with us. For anyone who doesn't already know, what are Oracle Fusion Cloud Applications? Yunus: Oracle Fusion Applications are a suite of cloud-based enterprise applications designed to run your business across finance, HR, supply chain, sales, services, and more, all on a unified platform. They are designed to help enterprises operate smarter and faster by embedding AI directly into business processes. That means better forecasts in finance, faster hiring decisions in HR, optimized supply chains, and more personalized customer experiences. 01:42 Nikita: And we know they've been built for today's fast-paced, AI-driven business environment. So, what are the different functional pillars within Oracle Fusion Apps? Yunus: The first one is ERP, or Enterprise Resource Planning, which supports financials, procurement, and project management. It's the backbone of many organizations' day-to-day operations. HCM, or Human Capital Management, handles workforce-related processes such as hiring, payroll, performance, and talent development, helping HR teams operate more efficiently. SCM, or Supply Chain Management, enables businesses to manage their logistics, inventory, suppliers, and manufacturing. It's particularly critical in industries with complex operations like retail and manufacturing. CX, or Customer Experience, covers the full customer life cycle, which includes sales, marketing, and service. These modules help businesses connect with their customers more personally and proactively, whether through targeted campaigns or responsive support. 03:02 Lois: Yunus, what sets Fusion apart? Yunus: What sets Fusion apart is how these applications work seamlessly together.
They share data natively and continuously improve with AI and automation, giving you not just tools, but intelligence at scale. Oracle applications are built to be AI first, with a complete suite of finance, supply chain, manufacturing, HR, sales, service, and marketing applications that is tightly coupled with our industry and data intelligence applications. The easiest and the most effective way to start building your organization's AI muscle is with AI embedded in Fusion applications. For example, if the customer needs to return a defective product, the service representative simply clicks on Ask Oracle for the answers. Since the AI agent is embedded in the application, it has contextual information about the customer, the order, and any special service contract, or any other feature that is required for this process. The AI agent automatically figures out the return policy, including options to send a replacement product immediately, offer a discount for the inconvenience, and expedite shipping. Another AI agent sends a personalized email confirming details of the return, and a different AI agent creates the replacement order for fulfillment and shipping. Our AI-embedded Fusion Applications can automate an end-to-end business process from service request to return order to fulfillment and shipping and then accounting. These are pre-built and tested so that all the worry and hard work is removed from implementation. They cover the core workflows. Basically, they address tasks that form part of the organization's core workflow. Users require no technical knowledge in these scenarios. 05:16 Lois: That's great! So, you don't need to be an AI expert or a data scientist to get going. Yunus: The outcomes are super fast in business software, and context is everything. Just having the right information isn't enough. This is about having the information in the right place at the right time for it to be instantly actionable. They are ready from day one and can be optimized over time. They are powerful out of the box and only get better with day-to-day processes and performance. 05:55 Are you working towards an Oracle Certification this year? Join us at one of our certification prep live events in the Oracle University Learning Community. Get insider tips from seasoned experts and learn from others who have already taken their certifications. Go to community.oracle.com/ou to jump-start your journey towards certification today! 06:20 Nikita: Welcome back! So, when we talk about the AI capabilities in Fusion apps, I know we have different types. Can you tell us more about them? Yunus: Predictive AI is where it all started. These models analyze historical patterns and data to anticipate what might happen next. For example, predicting employee attrition, forecasting demand in the supply chain, or flagging potential late payments in finance workflows. These are embedded into business processes to surface insights before action is needed. Then we have got generative AI, which takes this a step further. Instead of just providing insights, it creates content, such as auto-generating job descriptions, summarizing performance reviews, or even crafting draft responses to supplier queries. This saves time and boosts productivity across functions like HR, CX, and procurement. Last but not least, we have got agentic AI, which is the most advanced layer. These agents don't just provide suggestions, they take actions on behalf of the users.
Think of an agent that not only recommends actions in a workflow, but also executes them, creating tasks, filing tickets, updating systems, and communicating with stakeholders, all autonomously but under user control. And importantly, many business scenarios today benefit from a blend of these types. For example, an AI assistant in Fusion HCM might predict employee turnover, which is predictive AI; generate tailored retention plans, which is generative AI; and initiate outreach or next steps, which is agentic AI. So, Oracle integrates these capabilities in a harmonious way, enabling users to act faster, personalize at scale, and drive better business outcomes. 08:39 Lois: Ok, let's get into the specifics. How does Oracle use predictive AI across its Fusion apps, helping businesses anticipate what's coming and act proactively? Yunus: In HCM, take a feature like recommended jobs. Candidates visiting a potential employer's website encounter an improved online experience whereby, if they have uploaded their resumes, they will be shown job opportunities that match their skills and experience mix. This helps candidates who are unsure what to search for by showing them roles and titles they may not have considered. Time to hire provides an estimate of how long it will take an HR team to fill an open role. This is really useful not only in terms of planning recruitment, but also in terms of understanding whether you might need some temporary cover and how long the process will actually take. In supply chain management, predictive AI is leveraged to forecast transit times and estimated times of arrival, enhancing efficiency and optimizing operations. It can flag abnormal patterns in supply or inventory, for example, if a batch of parts is behaving differently on the production line, and predict future demand, helping avoid overstocking or stockouts. In ERP, predictive AI can audit your expenses, plan for future expenses, and enable dynamic discounting for vendors who are likely to accept early payment discounts. It can also speed up reimbursements through automated expense entries. In CX, you have the option to go with adaptive intelligence for sales, which helps representatives prioritize leads by the likelihood that a specific lead will close, helping them focus their time and effort. And predictive scheduling and routing in service delivery ensures that the right resource is assigned to the right customer at the right time, boosting operational efficiency and customer satisfaction. 11:23 Lois: Now let's shift our focus to generative AI. How does Oracle implement generative AI across HCM, ERP, Supply Chain, and CX? Yunus: In HCM, generative AI can automatically generate performance review summaries from raw data, saving time for HR teams, and can help in providing candidates with summaries of their interview process, feedback, and next steps, all auto-generated. With AI assistance, goal creation for employees can be automated, and the system analyzes performance data and trends to propose meaningful and attainable goals, aligning them with organizational objectives and employee capabilities.
In SCM, similarly, generative AI helps in drafting summaries of purchase orders. It can automatically create clear, readable synopses and can summarize complex negotiations and discussions, making it easier for supply chain managers to analyze supplier proposals, track negotiation processes, and understand key takeaways. It can also help generate summaries of repair and item master definitions, and can generate descriptions for items based on their specifications, helping product teams automatically generate catalog content. In ERP, you can automate the creation of business reports, offering more insights and actionable narratives rather than just showing the raw data. The AI can provide context, interpretations, and recommendations. AI can also take raw project data and generate comprehensive, easy-to-read project status reports that stakeholders can quickly review. In CX, we have got service request summarization, which can condense the long histories of customer service requests and tickets, allowing support teams to understand the key points in a fraction of the time, and can also create knowledge base articles directly from common service requests or inquiries, which not only improves internal knowledge management but also empowers customers by enabling self-service. And generative AI can automatically generate success stories or case studies from successful opportunities or sales, which can be used as marketing content or for internal knowledge sharing. 14:20 Nikita: And what about Oracle's Agentic AI? What are its capabilities across the different pillars? Yunus: In HCM, agentic AI handles the end-to-end onboarding experience, from explaining policies to guiding document submissions, even booking orientation sessions, allowing HR staff to focus on human engagement. It can further support HR teams during performance review cycles by surfacing high-potential employees, pulling in performance data, and recommending next actions like promotions or learning paths. It helps manage time-off requests by checking eligibility and policy constraints and suggesting appropriate substitutes, reducing administrative friction and errors. In SCM, agentic AI in Fusion Applications acts as a real-time assistant to ensure buyers follow procurement policies, reducing compliance risk and manual errors. It can also support sales representatives with real-time insights and next best actions during the quoting or ordering process, improving customer satisfaction and sales performance. In ERP, agentic AI can handle document intake, extraction, and routing, saving significant time on manual document management across financial functions. It automates reconciliation tasks by matching transactions, flagging anomalies, and suggesting resolutions. It helps reduce close cycle timelines, continuously analyzes profit margins, and recommends pricing adjustments that can be made in your ERP.
In CX, agentic AI in Fusion Applications supports staff by instantly compiling full customer histories, orders, service requests, and interactions. It can act like a real-time assistant, summarizing open tickets and resolutions so agents can take over or escalate without needing to dig through the notes, and it can dynamically adjust technician schedules and routes based on traffic, priority, or cancellations, increasing field efficiency and customer satisfaction. 17:04 Lois: Thank you so much, Yunus. To learn more about the topics covered today, visit mylearn.oracle.com and search for the AI for You course. Nikita: Join us next week as we cover how AI is being applied across sectors like healthcare, finance, and retail, and tackle the big question: how do we keep these technologies aligned with human values? Until then, this is Nikita Abraham… Lois: And Lois Houston, signing off! 17:30 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.…
Oracle University Podcast
In this episode, Lois Houston and Nikita Abraham are joined by Principal Instructor Yunus Mohammed to explore Oracle's approach to enterprise AI. The conversation covers the essential components of the Oracle AI stack and how each part, from the foundational infrastructure to business-specific applications, can be leveraged to support AI-driven initiatives. They also delve into Oracle's suite of AI services, including generative AI, language processing, and image recognition. AI for You: https://mylearn.oracle.com/ou/course/ai-for-you/152601/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hey everyone! In our last episode, we discussed why the decision to buy or build matters in the world of AI deployment. Lois: That's right, Niki. Today is all about the Oracle AI stack and how it empowers not just developers and data scientists, but everyday business users as well. Then we'll spend some time exploring Oracle AI services in detail. 01:00 Nikita: Yunus Mohammed, our Principal Instructor, is back with us today. Hi Yunus! Can you talk about the different layers in Oracle's end-to-end AI approach? Yunus: The first, base layer is the AI infrastructure, the powerful compute and storage layer that enables scalable model training and inference. Sitting above the infrastructure, we have the data platform. This is where data is stored, cleaned, and managed. Without a reliable data foundation, AI simply can't perform. So data is the base of AI, and reliable data gives AI the support it needs to do its job. Then, we have AI and ML services. These provide ready-to-use tools for building, training, and deploying custom machine learning models. Next to the AI/ML services, we have generative AI services. This is where Oracle enables advanced language models and agentic AI tools that can generate content, summarize documents, or assist users through chat interfaces. Then, we have the top layer, which is applications, things like Fusion Applications or industry-specific solutions where AI is embedded directly into business workflows for recommendations, forecasting, or customer support. Finally, Oracle integrates with a growing ecosystem of AI partners, allowing organizations to extend and enhance their AI capabilities even further. In short, Oracle doesn't just offer AI as a feature. It delivers it as a full-stack capability, from infrastructure to the applications layer. 02:59 Nikita: Ok, I want to get into the core AI services offered by Oracle Cloud Infrastructure. But before we get into the finer details, broadly speaking, how do these services help businesses?
Yunus: These services make AI accessible, secure, and scalable, enabling businesses to embed intelligence into workflows, improve efficiency, and reduce human effort in repetitive or data-heavy tasks. And the best part is, Oracle makes it easy to consume these through application interfaces (APIs), software development kits (SDKs), and integration with Fusion Applications. So, you can add AI where it matters without needing a team of data scientists to do that work. 03:52 Lois: So, let's get down to it. The first core service is Oracle's Generative AI service. What can you tell us about it? Yunus: This is a fully managed service that allows businesses to tap into the power of large language models. You can work with these models as they are, or take them from scratch to a well-tuned custom model. You can use these models for a wide range of use cases like summarizing text, generating content, answering questions, or building AI-powered chat interfaces. 04:27 Lois: So, what will I find on the OCI Generative AI Console? Yunus: The OCI Generative AI Console highlights three key components. The first one is dedicated AI clusters. These are GPU-powered environments used to fine-tune and host your own custom models. They give you control and performance at scale. The second is custom models. You can take a base language model and fine-tune it using your own data, for example, company manuals, HR policies, or customer interactions. You can use this to create a model that speaks your business language. And last but not least, endpoints. These are the interfaces through which your applications connect to the model. Once deployed, your app can query the model securely and at scale, and you don't need to be a developer to get started. Oracle offers a playground, a no-code environment where you can try out models, adjust parameters, and test responses interactively. So overall, the Generative AI service is designed to make enterprise-grade AI accessible and customizable, fitting directly into business processes, whether you are building a smart assistant or automating content generation.
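For readers following along, here is a rough sketch of the general shape of "an app queries a deployed endpoint": send a prompt, get generated text back. The URL, token, payload fields, and response field below are all hypothetical placeholders, not the real OCI API; a real OCI Generative AI endpoint is normally called through the OCI SDK with signed requests rather than a plain bearer token.

    import requests

    # Hypothetical placeholders -- substitute your real endpoint and auth.
    ENDPOINT_URL = "https://example.invalid/my-genai-endpoint/generate"
    API_TOKEN = "YOUR_TOKEN"

    def ask_model(prompt: str) -> str:
        """Send a prompt to a deployed text-generation endpoint, return the reply."""
        response = requests.post(
            ENDPOINT_URL,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            json={"prompt": prompt, "max_tokens": 200},  # assumed payload shape
            timeout=30,
        )
        response.raise_for_status()
        return response.json()["generated_text"]  # assumed response field

    print(ask_model("Summarize our travel expense policy in three bullet points."))

The point of the endpoint abstraction is exactly this: the application only needs an HTTP call, while the cluster, the fine-tuned model, and the scaling all stay behind the interface.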
06:00 Lois: The next key service is OCI Generative AI Agents. Can you tell us more about it? Yunus: OCI Generative AI Agents combines a natural language interface with generative AI models and enterprise data stores to answer questions and take actions. The agent remembers the context, uses previous interactions, and retrieves deeper, product-specific details. These aren't just static chatbots. They are context-aware, grounded in business data, and able to handle multi-turn, follow-up queries with relevant, accurate responses, driving productivity and decision-making across departments like sales, support, or operations. 06:54 Oracle University's Race to Certification 2025 is your ticket to free training and certification in today's hottest tech. Whether you're starting with Artificial Intelligence, Oracle Cloud Infrastructure, Multicloud, or Oracle Data Platform, this challenge covers it all! Learn more about your chance to win prizes and see your name on the Leaderboard by visiting education.oracle.com/race-to-certification-2025. That's education.oracle.com/race-to-certification-2025. 07:37 Nikita: Welcome back! Yunus, let's move on to the OCI Language service. Yunus: OCI Language helps businesses understand and process natural language at scale. It uses pretrained models, which means they are already trained on large industry data sets and are ready to be used right away, without requiring AI expertise. It detects over 100 languages, including English, Japanese, Spanish, and more, which is great for global businesses that receive multilingual input from customers. It identifies sentiment for different aspects of a sentence. For example, in a review like, "The food was great, but the service sucked," OCI Language can tell that food has a positive sentiment while service has a negative one. This is called aspect-based sentiment analysis, and it is more insightful than just labeling the entire text as positive or negative. Then there is key phrase extraction: identifying the words or terms that represent important ideas or subjects and capture the core message. These help automate tagging, summarizing, or even routing of content like support tickets or emails. In real life, businesses are using this for customer feedback analysis, support ticket routing, social media monitoring, and even regulatory compliance.
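To make aspect-based sentiment concrete, here is a toy, lexicon-based sketch of the idea, not the OCI Language service itself: each aspect in the review gets its own label instead of one label for the whole text.

    # Toy aspect-based sentiment: judge each aspect by the opinion words
    # in its own clause. Real services use trained models, not word lists.
    POSITIVE = {"great", "excellent", "friendly", "fast"}
    NEGATIVE = {"sucked", "terrible", "slow", "rude"}

    def aspect_sentiments(text: str, aspects: list[str]) -> dict[str, str]:
        results = {}
        # Split the review into clauses so each aspect is scored separately.
        for clause in text.lower().replace(",", ".").split("."):
            words = set(clause.split())
            for aspect in aspects:
                if aspect in words:
                    if words & POSITIVE:
                        results[aspect] = "positive"
                    elif words & NEGATIVE:
                        results[aspect] = "negative"
                    else:
                        results[aspect] = "neutral"
        return results

    review = "The food was great, but the service sucked"
    print(aspect_sentiments(review, aspects=["food", "service"]))
    # {'food': 'positive', 'service': 'negative'}

Notice that a single whole-text label would have to average these two opposite opinions away; the per-aspect output is what makes the analysis actionable.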
09:21 Nikita: That's fantastic. And what about the OCI Speech service? Yunus: OCI Speech is an AI service that transcribes speech to text. Think of it as an AI-powered transcription engine that listens to spoken English, whether in audio or video files, and turns it into usable, searchable, readable text. It provides timestamps, so you know exactly when something was said, a valuable feature for reviewing legal discussions, media footage, or compliance audits. OCI Speech even distinguishes between different speakers. You don't need to train it from scratch. It is a pretrained model hosted behind an API. Just send your audio to the service, and you get accurate, timestamped text back in return. 10:17 Lois: I know we also have a service for object detection… called OCI Vision? Yunus: OCI Vision uses pretrained deep learning models to understand and analyze visual content, just like a human might. You can upload images or videos, and the AI can tell you what is in them and where. There are two primary use cases for OCI Vision. One is object detection. Say you have a red car: OCI Vision is not just identifying that it's a car. It is detecting and labeling parts of the car too, like the bumper, the wheels, the design components. This is critical in industries like manufacturing, retail, or logistics. For example, in quality control, OCI Vision can scan product images to detect missing or defective parts automatically. Then we have image classification. This is useful in scenarios like automated tagging of photos, managing digital assets, or classifying the scene or context of an image. OCI Vision is a fully managed service, so no complex model training is required, and it's available via API. It also supports defining your own custom models for your environment. 11:51 Nikita: And the final service is related to text and called OCI Document Understanding, right? Yunus: So OCI Document Understanding allows businesses to automatically extract structured insights from unstructured documents like invoices, contracts, receipts, resumes, or other business documents. 12:13 Nikita: And how does it work? Yunus: OCI reads the content from the scanned document. The OCR is smart: it recognizes both printed and handwritten text. Then it determines what type of document it is, so document classification is done. Text recognition recognizes the text, and the service then classifies the document, for example, as a purchase order, a bank statement, or a medical report. If your business handles documents in multiple languages, the AI can also help with language detection, which helps you route or translate that content. Many documents contain structured data in table format, think pricing tables or line items. OCI will help you extract these with high accuracy for reporting or for feeding into ERP systems. And finally, I would say key value extraction. It pulls out critical business values like invoice numbers, payment amounts, or customer names from fields that may not always follow a fixed format. So, this service reduces the need for manual review, cuts down processing time, and ensures high accuracy for your systems. 13:36 Lois: What are the key takeaways our listeners should walk away with after this episode? Yunus: The first one: Oracle doesn't treat AI as just a standalone tool. Instead, AI is integrated from the ground up, whether you're talking about infrastructure, data platforms, machine learning services, or applications like HCM, ERP, or CX. Second, Oracle AI services prioritize data management, security, and governance, all essential for enterprise AI use cases. So, it is about trust. Can your AI handle sensitive data? Can it comply with regulations? Oracle builds its AI services with a strong foundation in data governance, robust security measures, and tight control over data residency and access. This makes Oracle AI especially well suited for industries like health care, finance, logistics, and government, where compliance and control aren't optional. They are critical. 14:44 Nikita: Thank you for another great conversation, Yunus. If you're interested in learning more about the topics we discussed today, head on over to mylearn.oracle.com and search for the AI for You course. Lois: In our next episode, we'll get into Predictive AI, Generative AI, and Agentic AI, all with respect to Oracle Fusion Applications. Until then, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 15:10 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.…
Oracle University Podcast
How do you decide whether to buy a ready-made AI solution or build one from the ground up? The choice is more than just a technical decision; it's about aligning AI with your business goals. In this episode, Lois Houston and Nikita Abraham are joined by Principal Instructor Yunus Mohammed to examine the critical factors influencing the buy vs. build debate. They explore real-world examples where businesses must weigh speed, customization, and long-term strategy. From a startup using a SaaS chatbot to a bank developing a custom fraud detection model, Yunus provides practical insights on when to choose one approach over the other. AI for You: https://mylearn.oracle.com/ou/course/ai-for-you/152601/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. --------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:26 Nikita: Welcome to the Oracle University Podcast! I'm Nikita Abraham, Team Lead: Editorial Services with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hi there! Last week, we spoke about the key stages in a typical AI workflow and how data quality, feedback loops, and business goals influence AI success. 00:50 Nikita: In today's episode, we're going to explore whether you should buy or build AI apps. Joining us again is Principal Instructor Yunus Mohammed. Hi Yunus, let's jump right in. Why does the decision of buy versus build matter? Yunus: When we talk about buy versus build, we are talking about a strategic business decision, and it is evaluated through a decision lens. At the center of the decision lens is the business objective: what are we trying to solve? Then we evaluate our constraints around that objective, like cost, time, and talent. And finally, we can decide whether we need to buy or build. But remember, there is no single correct answer. What's right for one business may not work for another. 01:54 Lois: OK, can you give us examples of both approaches? Yunus: The first example is a startup using a SaaS AI chatbot. Being a startup, they chose a ready-made solution. Now, why did they do this? Because speed and simplicity mattered more than the deep customization a bespoke chatbot would require. Their main aim was to be ready in a short period of time and keep things simple, and this let them get to market fast with low upfront cost and minimal technical complexity. But in some situations, it's different. Take a bank, which needs to build a fraud model. That cannot be outsourced or bought off the shelf, so they build a custom model in-house. With this custom model, they have tighter control, it is tuned to their standards, and it is created by their own experts.
So these two generic examples, the chatbot and the fraud model, help you decide: should I go for a SaaS product, with the simple choice of using an existing LLM endpoint without making any changes? Or should I go with a model chosen for my business and organizational requirements, fine-tuning it later to better handle the scenarios and conditions specific to my organization? You decide with reference to whether you want it done faster or more customized to your organization. So buy when it is generic, and build when it is strategic. SaaS, which is software as a service, refers to ready-to-use cloud-based applications that you access via the internet. You log into the platform and use the built-in AI; there's no setup required. A real-world example is Oracle Fusion apps with AI features enabled. In-house integration means embedding AI, shaped by your own requirements, into your own systems, often using custom APIs and data pipelines, and hosting it yourself. It gives you more flexibility but requires a lot of resources and expertise. A real-world example of this scenario is a logistics-heavy company integrating a customer support model into its CX. 04:41 Lois: But what are the pros and cons of each approach? Yunus: SaaS and Fusion Applications offer fast deployment with little to no coding required, making them ideal for businesses looking to get started quickly. They typically come with lower upfront costs and are maintained by the vendor, which means updates, security, and support are handled externally. However, customization is limited, and they are best suited for common, repeatable use cases, like a standard chatbot, reporting tools, or off-the-shelf analytics. With in-house or custom integration, you have more control, but it takes longer to build and requires a higher initial investment. The in-house approach allows full customization of the features and the workflows, enabling you to design and tailor the AI system to your specific needs. 05:47 Nikita: If you're weighing the choice between buying or building, what are the critical business considerations you'd need to take into account? Yunus: Let's take one key business consideration, which is time to market. If your goal is to launch fast, maybe you're a startup trying to gain traction quickly, then a prebuilt, plug-and-play AI solution, for example a chatbot or a standard analytics tool, might be your best bet. But if you have time and you are aiming for precision, a custom model could be worth the wait. Cost is next. Prebuilt SaaS tools usually have lower upfront costs and work on a subscription model. Custom solutions, on the other hand, may require a bigger investment upfront, in development, talent, and infrastructure, but could offer cost savings in the long run. So, ask yourself a question here: is this AI helping us stand out in the market? If the answer is yes, you may want to build something proprietary. An organization wouldn't use a generic recommendation engine if that engine is part of their secret sauce. Some use cases also require flexibility, like tailoring the rules to match your specific risk criteria. In those scenarios, you will go for customization.
Off-the-shelf solutions may not go deep enough for the requirements you need to cover, so you either take them and customize them, or you build your own AI features. The other important business consideration is the talent and expertise your organization has. The question to ask is: do you have an internal team that is well versed in developing AI solutions, or access to one that can help you build your own proprietary products? If not, go with SaaS. If you do, building could unlock greater control over your AI features and AI models. The next core consideration is security and data privacy. If you're handling sensitive information, for example health care or finance data, you might not want to send your data to third-party tools. In-house models offer better control over data security and compliance, whether you leverage a prebuilt or a custom model. 08:50 Oracle University is proud to announce three brand new courses that will help your teams unlock the power of Redwood—the next generation design system. Redwood enhances the user experience, boosts efficiency, and ensures consistency across Oracle Fusion Cloud Applications. Whether you're a functional lead, configuration consultant, administrator, developer, or IT support analyst, these courses will introduce you to the Redwood philosophy and its business impact. They'll also teach you how to use Visual Builder Studio to personalize and extend your Fusion environment. Get started today by visiting mylearn.oracle.com. 09:31 Nikita: Welcome back! So, getting back to what you were saying before the break, what are pre-built and custom models? Yunus: A prebuilt model is an AI solution that has already been trained by someone else, typically a tech provider. It can be used to perform a specific task like recognizing images, translating text, or detecting sentiment. You can think of it like buying a preassembled appliance. You plug it in, configure a few settings, and it's ready to use. You don't need to know how the internal parts work. You benefit from the speed, ease, and reliability of a prebuilt model, but you can't easily change how it works under the hood. A custom model, on the other hand, is an AI solution that your organization designs, trains, and tunes specifically for its business problems, using its own data. You can think of it like tailoring your own suit. It takes more time and effort to create, it is built to your exact measurements and needs, and you have full control over how it performs and evolves. 10:53 Lois: So, when would you choose a pre-built versus a custom model? Yunus: Depending on speed, simplicity, control, and customization, you can decide between a prebuilt and a custom model. Prebuilt models are plug-and-play solutions. Think of tools like Google Translate for languages, or OpenAI APIs for summarization, sentiment analysis, or chatbots. They are quick to deploy, require low technical effort, and are great for getting started fast, but they also have limits. Customization is minimal, and you may not be able to fine-tune them to your specific tone or business logic. They work well when the problem is common and nonstrategic, like scanning documents or auto-tagging images. The custom-built model, on the other hand, is built from the ground up using your own data and objectives. It takes longer and requires technical expertise, but it offers precise control and full alignment with your business needs. It is ideal when you are dealing with sensitive data, competitive workflows, or highly specific customer interactions. For example, a bank may build a custom fraud detection model tuned to its exact transaction standards and patterns.
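To ground the distinction, here is a minimal Python sketch contrasting the two paths. The prebuilt half uses the Hugging Face transformers library as a stand-in for any vendor-hosted model; the custom half trains a tiny fraud model on made-up transactions with scikit-learn. Both datasets are toy illustrations, not real examples from the episode.

    # Prebuilt: someone else trained it; you just use it.
    # (Assumes `pip install transformers`; downloads a default pretrained model.)
    from transformers import pipeline

    classifier = pipeline("sentiment-analysis")
    print(classifier("The onboarding process was smooth and quick."))

    # Custom: trained in-house on your own (here: toy) transaction data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Each row is [amount, hour_of_day]; label 1 = fraud, 0 = legitimate.
    X = np.array([[2500, 3], [30, 14], [4000, 2], [12, 10], [3100, 1], [45, 16]])
    y = np.array([1, 0, 1, 0, 1, 0])
    fraud_model = LogisticRegression(max_iter=1000).fit(X, y)
    print(fraud_model.predict([[3600, 4]]))  # large late-night amount -> flagged

The prebuilt call works in one line but behaves the same for every company; the custom model needed data and code, but its decision boundary is yours to tune.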
12:37 Nikita: What if someone wants the best of both worlds? Yunus: Then there's the hybrid approach. Many companies today don't start by building AI from scratch. Instead, they begin with prebuilt models, like an API that already performs tasks such as summarizing, translating, or answering questions using generic knowledge. This gets you up and running quickly, with solid early results. As your business matures, you start to layer in your custom data, think internal policies, frequently asked questions, or customer interactions, and then you fine-tune the model to behave the way your business needs it to behave. Now your AI starts producing business-ready output: smarter, more relevant, and aligned with your tone, brand, and compliance needs. 13:45 Lois: Ok…let's think of AI deployment in the hybrid approach as following a pyramid or ladder-like structure. Can you take us through the different levels? Yunus: At the top: quick start, minimal setup, great for business automation and pilot use cases. Off-the-shelf APIs or platforms get you going faster with fewer requirements, essentially acting as a pilot. One level down, you add your own data or logic. This is where fine-tuning and prompt engineering help tailor the AI to your workflows and your language. And at the bottom, you build your own model. This is reserved for core capabilities or competitive advantages, where total control and differentiation matter enough to justify building. You don't need to go all in from day one. Start with what is available, like an off-the-shelf API or platform, customize as you grow, and build only when it gives you a true edge. This is what we call the best of both worlds: buy and build. 15:05 Lois: Thank you so much, Yunus, for joining us again. To learn more about the topics covered today, visit mylearn.oracle.com and search for the AI for You course. Nikita: Join us next week for another episode of the Oracle University Podcast where we discuss the Oracle AI stack and Oracle AI services. Until then, this is Nikita Abraham… Lois: And Lois Houston, signing off! 15:29 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.…
Oracle University Podcast
Join Lois Houston and Nikita Abraham as they chat with Yunus Mohammed, a Principal Instructor at Oracle University, about the key stages of AI model development. From gathering and preparing data to selecting, training, and deploying models, learn how each phase impacts AI's real-world effectiveness. The discussion also highlights why monitoring AI performance and addressing evolving challenges are critical for long-term success. AI for You: https://mylearn.oracle.com/ou/course/ai-for-you/152601/252500 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. -------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hey everyone! In our last episode, we spoke about generative AI and gen AI agents. Today, we're going to look at the key stages in a typical AI workflow. We'll also discuss how data quality, feedback loops, and business goals influence AI success. With us today is Yunus Mohammed, a Principal Instructor at Oracle University. 01:00 Lois: Hi Yunus! We're excited to have you here! Can you walk us through the various steps in developing and deploying an AI model? Yunus: The first step is to collect data. We gather relevant data, either historical or real-time, like customer transactions, support tickets, survey feedback, or sensor logs. A travel company, for example, can collect past booking data to predict future demand. So, data is the most crucial component for building your AI models. But it's not just about having the data. You need to prepare it. In the prepare data step, we clean, organize, and label the data. AI can't learn from messy spreadsheets. We make the data more understandable and organized: removing duplicates, filling missing values with sensible defaults, formatting dates. All of this is organization of the data, plus labeling it so it can be used for supervised learning. After preparing the data, we select the model to train. We pick the type of model that fits your goals. It can be a traditional ML model, a deep learning network, or a generative model. The model is chosen based on the business problem and the data we have. Then we train the model using the prepared data, so it can learn the patterns in the data. After the model is trained, we need to evaluate it. You check how well the model performs. Is it accurate? Is it fair? The evaluation metrics will vary based on the goal you're trying to reach. If your model frequently misclassifies emails, it is not ready, so you train it further, until it accurately identifies official mail as official and spam as spam.
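Here is a minimal sketch of that train-and-evaluate loop in Python with scikit-learn, using a made-up, eight-email dataset in place of real training data:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline

    # Toy labeled dataset: 1 = spam, 0 = official mail.
    emails = [
        "congratulations you won a free prize",
        "claim your reward now limited offer",
        "win cash instantly click here",
        "free prize waiting act now",
        "quarterly report attached for review",
        "meeting moved to 3pm see agenda",
        "please approve the expense report",
        "minutes from the project review call",
    ]
    labels = [1, 1, 1, 1, 0, 0, 0, 0]

    # Hold some emails back so evaluation happens on data the model never saw.
    X_train, X_test, y_train, y_test = train_test_split(
        emails, labels, test_size=0.25, random_state=0, stratify=labels
    )

    model = make_pipeline(CountVectorizer(), LogisticRegression())
    model.fit(X_train, y_train)                    # train: learn word patterns
    predictions = model.predict(X_test)            # predict on held-out emails
    print("accuracy:", accuracy_score(y_test, predictions))  # evaluate

If the accuracy is too low, that's the signal Yunus describes: go back, get more or better data, and train further before even thinking about deployment.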
After evaluating and making sure your model fits well, you go to the next step, which is deploying the model. Once we are happy, we put it into the real world: into a CRM, a web application, or behind an API. So, I can expose it through an API, which is an application programming interface, or add it to a CRM, a customer relationship management system, or to a web application. For example, a chatbot becomes available on your company's website, and that chatbot might be using a generative AI model. Once I have deployed the model and it is working fine, I need to keep track of how it is working and improve it whenever needed. So we go to a stage called monitor and improve. AI isn't set-it-and-forget-it. Over time, a lot of changes happen to the data, so we monitor performance and retrain when needed. An e-commerce recommendation model needs updates as trends shift. The end user finally sees the results after all these processes: a better product, a smarter service, or faster decision-making. If we get the flow right, they may not even realize AI is behind the accurate results they're getting. 04:59 Nikita: Got it. So, everything in AI begins with data. But what are the different types of data used in AI development? Yunus: We work with three main types of data: structured, unstructured, and semi-structured. Structured data is like a clean set of tables in Excel or a database: rows and columns with clear, consistent information. Unstructured data is messy data: emails, customer call recordings, videos, or social media posts all come under unstructured data. Semi-structured data is things like logs, XML files, or JSON files. Not quite neat, but not entirely messy either. 05:58 Nikita: Ok… and how do the data needs vary for different AI approaches? Yunus: Machine learning often needs labeled data. A bank might feed it past transactions labeled as fraud or not fraud to train a fraud detection model. But machine learning also includes unsupervised learning, like clustering customer spending behavior. Here, no labels are needed. Deep learning needs a lot of data, usually unstructured, like thousands of loan documents, call recordings, or scanned checks. These are fed into neural networks to detect complex patterns. Data science focuses on insights rather than predictions. A data scientist at the bank might use customer relationship management exports and customer demographics to analyze which age group prefers credit cards over loans. Then we have generative AI, which thrives on diverse, unstructured, internet-scale data: books, code, images, chat logs. Models like ChatGPT are trained on this to generate responses, mimic styles, and synthesize content. So generative AI can power a banking virtual assistant, trained on chat logs and frequently asked questions, to answer customer queries 24/7. 07:35 Lois: What are the challenges when dealing with data? Yunus: Data isn't just about having enough. We must also think about quality: is it accurate and relevant? Volume: do we have enough for the model to learn from?
Bias: does my data contain unfairly skewed patterns, like rejecting more loan applications from a certain zip code? And privacy: are we handling personal data responsibly, especially data that is critical or regulated, like banking data or patient health records? Before building anything smart, we must start smart. 08:23 Lois: So, we've established that collecting the right data is non-negotiable for success. Then comes preparing it, right? Yunus: This is arguably the most important part of any AI or data science project. Clean data leads to reliable predictions. Imagine you have a column for age, and someone accidentally entered an age of 999. That's likely a data entry error. Or maybe a few rows have missing ages. We either fix, remove, or impute such issues. This step ensures our model isn't misled by incorrect values. Dates are often stored in different formats. For instance, a date can be stored month-first in some places and day-first in others. We want to bring everything into a consistent, usable format. This process is called transformation. Machine learning models can get confused if one feature, say income, ranges from 10,000 to 100,000 while another, like the number of kids, ranges from 0 to 5. So we normalize or scale values to bring them into a similar range, say 0 to 1. Models also don't understand words like small, medium, or large, so we convert them into numbers using encoding; one simple way is assigning 1, 2, and 3 respectively. Then there's removing stop words and punctuation, and breaking sentences into smaller meaningful units called tokens; this is used for generative AI tasks. In deep learning, especially for gen AI, image or audio inputs must be of uniform size and format. 10:31 Lois: And does each AI system have a different way of preparing data? Yunus: For machine learning, the focus is on cleaning, encoding, and scaling. Deep learning needs resizing and normalization for text and images. Data science is about reshaping, aggregating, and getting the data ready for insights. And generative AI needs special preparation, like chunking and tokenizing large documents, or compressing images.
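Here is what those preparation steps look like in practice: a minimal sketch using pandas and scikit-learn on a made-up four-row dataset. The 999 age, the mixed date formats, and the small/medium/large column mirror the examples above; parsing mixed date formats this way assumes pandas 2.x.

    import pandas as pd
    from sklearn.preprocessing import MinMaxScaler

    df = pd.DataFrame({
        "age": [34, 999, None, 45],              # 999 is a data-entry error
        "signup_date": ["2024-01-05", "05/02/2024", "2024-03-10", "2024-04-01"],
        "income": [10_000, 55_000, 100_000, 72_000],
        "num_kids": [0, 2, 5, 1],
        "size": ["small", "large", "medium", "small"],
    })

    # Clean: treat impossible ages as missing, then impute with the median.
    df.loc[df["age"] > 120, "age"] = None
    df["age"] = df["age"].fillna(df["age"].median())

    # Transform: bring mixed date formats into one consistent type (pandas >= 2.0).
    df["signup_date"] = pd.to_datetime(df["signup_date"], format="mixed")

    # Scale: put income and num_kids on the same 0-to-1 range.
    df[["income", "num_kids"]] = MinMaxScaler().fit_transform(df[["income", "num_kids"]])

    # Encode: map the ordered categories to numbers.
    df["size"] = df["size"].map({"small": 1, "medium": 2, "large": 3})
    print(df)

After these few lines, every column is numeric, consistently formatted, and on a comparable scale, which is exactly the state a model expects its training data to be in.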
11:06 Oracle University's Race to Certification 2025 is your ticket to free training and certification in today's hottest tech. Whether you're starting with Artificial Intelligence, Oracle Cloud Infrastructure, Multicloud, or Oracle Data Platform, this challenge covers it all! Learn more about your chance to win prizes and see your name on the Leaderboard by visiting education.oracle.com/race-to-certification-2025. That's education.oracle.com/race-to-certification-2025. 11:50 Nikita: Welcome back! Yunus, how does a user choose the right model to solve their business problem? Yunus: Just like a business uses different dashboards for marketing versus finance, in AI we use different model types depending on what we are trying to solve. Classification is choosing a category. A real-world example is deciding whether an email is spam or not; it's used in fraud detection, medical diagnosis, and so on. You classify the data, then assess how accurately it was classified. Regression is predicting a number, like what the price of a house will be next month. It's useful for forecasting sales, demand, or costs. Clustering groups things without labels. A real-world example is segmenting customers based on behavior for targeted marketing; it helps discover hidden patterns in large data sets. Generation is creating new content. AI writing product descriptions or generating images is a real-world example, and it's what generative AI models like ChatGPT or DALL-E do. 13:16 Nikita: And how do you train a model? Yunus: We feed it data in small chunks or batches, compare its guesses to the correct values, and adjust its internal weights to improve next time. The cycle repeats until the model gets good at making predictions. If you're building a fraud detection system, ML may be enough. If you want to analyze medical images, you will need deep learning. If you're building a chatbot, go for a generative model like an LLM. For all of these use cases, you select and train the applicable model as appropriate. 14:04 Lois: OK, now that the model's been trained, what else needs to happen before it can be deployed? Yunus: Evaluate the model. Assess its accuracy, reliability, and real-world usefulness before it's put to work. How often is the model right? Does it consistently perform well? Is it practical to use in the real world? Bad predictions don't just look bad; they can lead to costly business mistakes. Think of recommending the wrong product to a customer or misidentifying a financial risk. So what we do is split the data into two parts: training data, which is like teaching the model, and testing data, which is used to check how well the model has learned. Once trained, the model makes predictions, and we compare those predictions to the actual answers, just like checking your answer after a quiz. Evaluation is also tailored to the AI type. In machine learning, we care about prediction accuracy. Deep learning is about fitting complex data like voice or images, where the model repeatedly sees examples and tunes itself to reduce errors. In data science, we look for patterns and insights, such as which features matter. In generative AI, we judge output quality: is it coherent, useful, and natural? The model improves with accuracy and with the number of epochs it is trained for. 15:59 Nikita: So, after all that, we finally come to deploying the model… Yunus: Deploying a model means integrating it into your actual business systems, so it can start making decisions, automating tasks, or supporting customer experiences in real time. Think of it like this: training is teaching the model, evaluating is testing it, and deployment is giving it a job. The model needs a home, either in the cloud or on your company's own servers, a place where it can be reached by other tools. Exposed via an API or embedded in an application, this is how the AI becomes usable. Then the model receives live data and returns predictions: it listens to real-time inputs, like a user typing, searching, clicking, or making a transaction, and instantly responds with a recommendation, a decision, or a result.
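A minimal sketch of "giving the model a job": wrapping a classifier in a tiny Flask API so other systems can send live data and get predictions back. The inline toy model just keeps the sketch self-contained; in practice you would load a model you trained and saved earlier.

    from flask import Flask, jsonify, request
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Tiny inline model for the sketch; normally you'd load a saved model
    # (e.g. with joblib) that was trained and evaluated beforehand.
    model = make_pipeline(CountVectorizer(), LogisticRegression()).fit(
        ["win a free prize now", "meeting notes attached",
         "claim your reward", "see agenda below"],
        [1, 0, 1, 0],  # 1 = spam
    )

    app = Flask(__name__)

    @app.route("/predict", methods=["POST"])
    def predict():
        text = request.get_json()["text"]       # receives live data...
        spam = bool(model.predict([text])[0])   # ...model makes its decision...
        return jsonify({"spam": spam})          # ...and returns the prediction

    if __name__ == "__main__":
        app.run(port=5000)  # the model now has "a home" other tools can reach

With the server running, another system could POST {"text": "claim your reward now"} to http://localhost:5000/predict and instantly get back {"spam": true}.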
Deploying the model isn't the end of the story. It is just the beginning of the AI's real-world journey. Models may work well on day one, but things change. Customer behavior might shift. New products get introduced in the market. Economic conditions might evolve, like in the COVID era, when demand shifted and economic conditions genuinely changed. 17:48 Lois: Then it's about monitoring and improving the model to keep things reliable over time. Yunus: The monitor-and-improve loop is a continuous process that ensures an AI model remains accurate, fair, and effective after deployment. First, live predictions: the model is running in real time, making decisions or recommendations. Then, monitor performance: are those predictions still accurate and helpful? Is latency acceptable? This is where we track metrics, user feedback, and operational impact. Next, detect issues: is accuracy declining, are responses showing bias, are customers dropping off due to long response times? And the final step is to retrain or update the model. We add fresh data, tweak the logic, or even use a better architecture, then deploy the updated model. The new version replaces the old one, and the cycle continues. 18:58 Lois: And are there challenges during this step? Yunus: The common issues in monitor-and-improve are model drift, bias, and latency or failures. In model drift, the model becomes less accurate as the environment changes. With bias, the model may favor or penalize certain groups unfairly. With latency or failures, if the model is too slow or fails unpredictably, it disrupts the user experience. Take loan approvals: if we notice an unusually high rejection rate due to model bias, we might retrain the model with more diverse or balanced data. For a chatbot, we watch for drops in customer satisfaction, which might arise from model failures, and fine-tune the model's responses. And in demand forecasting, if the predictions no longer match real trends, say post-pandemic, due to model drift, we update the model with fresh data.
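Model drift is easy to check for in its simplest form: compare the distribution of a feature in live traffic against the distribution the model was trained on. The sketch below uses a Population Stability Index (PSI) style score on made-up demand numbers; the 0.25 retraining threshold is a common rule of thumb, not an Oracle-specific figure.

    import numpy as np

    def drift_score(train_values, live_values, bins=10):
        """PSI-style check: how far has the live distribution of a feature
        moved away from the distribution the model was trained on?"""
        edges = np.histogram_bin_edges(train_values, bins=bins)
        train_pct = np.histogram(train_values, bins=edges)[0] / len(train_values)
        live_pct = np.histogram(live_values, bins=edges)[0] / len(live_values)
        train_pct = np.clip(train_pct, 1e-6, None)  # avoid log(0)
        live_pct = np.clip(live_pct, 1e-6, None)
        return float(np.sum((live_pct - train_pct) * np.log(live_pct / train_pct)))

    rng = np.random.default_rng(0)
    train = rng.normal(100, 15, 5000)   # demand at training time
    live = rng.normal(130, 20, 5000)    # demand after a market shift
    print(f"PSI = {drift_score(train, live):.2f}")  # > 0.25 suggests retraining

Run on a schedule against each important input feature, a check like this turns "the world changed" from a surprise into an alert that triggers the retrain-and-redeploy cycle described above.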
20:11 Nikita: Thanks for that, Yunus. Any final thoughts before we let you go? Yunus: No matter how advanced your model is, its effectiveness depends on the quality of the data you feed it. That means the data needs to be clean, structured, and relevant, and it should map to the problem you're solving. If the foundation is weak, the results will be too. So data preparation is not just a technical step; it is a business-critical stage. Once deployed, AI systems must be monitored continuously. Watch for drops in performance, for bias creeping in, or for outdated logic, and improve the model with new data or refinements. That's what makes AI reliable, ethical, and sustainable in the long run. 21:09 Nikita: Yunus, thank you for this really insightful session. If you're interested in learning more about the topics we discussed today, go to mylearn.oracle.com and search for the AI for You course. Lois: That's right. You'll find skill checks to help you assess your understanding of these concepts. In our next episode, we'll discuss the idea of buy versus build in the context of AI. Until then, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 21:39 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.…
Oracle University Podcast
Join hosts Lois Houston and Nikita Abraham, along with Principal AI/ML Instructor Himanshu Raj, as they discuss the transformative world of Generative AI. Together, they uncover the ways in which generative AI agents are changing the way we interact with technology, automating tasks and delivering new possibilities. AI for You: https://mylearn.oracle.com/ou/course/ai-for-you/152601/252500 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead of Editorial Services. Nikita: Hi everyone! Last week was Part 2 of our conversation on core AI concepts, where we went over the basics of data science. In Part 3 today, we'll look at generative AI and gen AI agents in detail. To help us with that, we have Himanshu Raj, Principal AI/ML Instructor. Hi Himanshu, what's the difference between traditional AI and generative AI? 01:01 Himanshu: So until now, when we talked about artificial intelligence, we usually meant models that could analyze information and make decisions based on it, like a judge who looks at evidence and gives a verdict. That's traditional AI, focused on analysis, classification, and prediction. But with generative AI, something remarkable happens. Generative AI does not just evaluate. It creates. It's more like a storyteller who uses knowledge from the past to imagine and build something brand new. For example, instead of just detecting whether an email is spam, generative AI could write an entirely new email for you. Or, where traditional AI might predict what a photo contains, generative AI creates a brand-new photo based on a description. Generative AI refers to artificial intelligence models that can create entirely new content, such as text, images, music, code, or video, that resembles human-made work. Instead of simply analyzing or predicting, generative AI produces something original that resembles what a human might create. 02:16 Lois: How did traditional AI progress to the generative AI we know today? Himanshu: First came small supervised learning. In the early days, AI models were trained on small labeled data sets. For example, we could train a model with a few thousand emails labeled spam or not spam. The model would learn simple decision boundaries: if the email contains "congratulations," it might be spam. This was efficient for straightforward tasks, but it struggled with anything more complex. Then came large supervised learning.
As the internet exploded, massive data sets became available: millions of images, billions of text snippets. Models got better because they had much more data and stronger compute power, thanks to advances like GPUs and cloud computing. Think of training a model on millions of product reviews to predict customer sentiment, positive or negative, or to classify thousands of images into cars, dogs, planes, and so on. Models became more sophisticated, capturing deeper patterns rather than simple rules. And then generative AI came into the picture. We eventually reached a point where, instead of just classifying or predicting, models could generate entirely new content. Generative AI models like ChatGPT or GitHub Copilot are trained on enormous data sets, not to simply answer yes or no, but to create outputs that look and feel human-made. Instead of judging spam or sentiment, now the model can write an article, compose a song, paint a picture, or generate new software code. 03:55 Nikita: Himanshu, what motivated this sort of progression? Himanshu: Three reasons. First, data: we had way more of it, thanks to the internet, smartphones, and social media. Second, compute: graphics cards, GPUs, parallel computing, and cloud systems made it cheap and fast to train giant models. And third, and most important, ambition: humans always wanted machines not just to judge existing data, but to create new knowledge, art, and ideas. 04:25 Lois: So, what's happening behind the scenes? How is gen AI making these things happen? Himanshu: Generative AI is about creating entirely new things across different domains. On one side, we have large language models, or LLMs. They are masters of generating text: conversations, stories, emails, even code. And on the other side, we have diffusion models. They are the creative artists of AI, turning text prompts into detailed images, paintings, or even videos. These two are like two different specialists: the LLM acts like a brain that understands and talks, and the diffusion model acts like an artist that paints based on the instructions. And when we connect these together, we create something called multimodal AI: systems that can take in text and produce images, audio, or other media, opening up a whole new range of possibilities. So today, when ChatGPT or Gemini generates images, it's not just one model doing everything. These are specialized systems working together behind the scenes. 05:38 Lois: You mentioned large language models and how they power text-based gen AI, so let's talk more about them. Himanshu, what is an LLM and how does it work? Himanshu: An LLM is a probabilistic model of text, which means it tries to predict what word is most likely to come next based on what came before. This ability to predict one word at a time, intelligently, is what builds full sentences, paragraphs, and even stories. 06:06 Nikita: But what's large about this? Why's it called a large language model? Himanshu: It simply means the model has lots and lots of parameters. Think of parameters as adjustable dials the model fine-tunes during learning. There is no strict rule, but today's large models can have billions or even trillions of these parameters. And the more parameters, the more complex the patterns the model can understand, and the more human-like the language it can generate.
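The "predict the next word" idea fits in a few lines of Python. The toy sketch below counts word transitions in a made-up three-sentence corpus; a real LLM does the same job of estimating next-token probabilities, but with billions of learned parameters instead of raw counts.

    from collections import Counter, defaultdict

    corpus = (
        "the customer opened a new account . "
        "the customer closed the old account . "
        "the customer opened a support ticket ."
    ).split()

    # Count how often each word follows each other word.
    next_words = defaultdict(Counter)
    for current_word, following_word in zip(corpus, corpus[1:]):
        next_words[current_word][following_word] += 1

    def predict_next(word: str) -> str:
        """Return the most probable next word, printing its probability."""
        counts = next_words[word]
        best, freq = counts.most_common(1)[0]
        print(f"P({best} | {word}) = {freq}/{sum(counts.values())}")
        return best

    predict_next("customer")  # P(opened | customer) = 2/3 -> "opened"

Chaining such predictions, always appending the chosen word and predicting again, is exactly how one-word-at-a-time prediction grows into full sentences and paragraphs.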
06:37 Nikita: Ok… and image-based generative AI is powered by diffusion models, right? How do they work? Himanshu: Diffusion models start with something that looks like pure random noise. Imagine static on an old TV screen: no meaningful image at all. From there, the model carefully removes noise, step by step, to create something more meaningful. Think of it like sculpting a statue. You start with a rough block of stone, and slowly, carefully, you chisel away to reveal the beautiful sculpture hidden inside. At each step of this process, the AI is making an educated guess based on everything it has learned from millions of real images: it's trying to predict what a slightly less noisy version of the image should look like. 07:24 Stay current by taking the 2025 Oracle Fusion Cloud Applications Delta Certifications. This is your chance to demonstrate your understanding of the latest features and prove your expertise by obtaining a globally recognized certification, all for free! Discover the certification paths, use the resources on MyLearn to prepare, and future-proof your skills. Get started now at mylearn.oracle.com. 07:53 Nikita: Welcome back! Himanshu, for most of us, our experience with generative AI is with text-based tools like ChatGPT. But I'm sure the uses go far beyond that, right? Can you walk us through some of them? Himanshu: The first is text generation. Chatbots are now capable of handling nuanced customer queries in banking, travel, and retail, saving companies hours of support time. Think of a bank chatbot helping a customer understand mortgage options, or a virtual HR assistant in a large company handling leave requests. Then there are embedding models, which power smart search systems. Instead of searching by keywords, businesses can now search by meaning. For instance, a legal firm can search for cases about contract violations in tech and get semantically relevant results, even if those exact words are not used in the documents. Third is code generation: tools like GitHub Copilot help developers write boilerplate or even functional code, accelerating software development, especially for routine or repetitive tasks. Imagine writing a whole function with just a few prompts. The second application area is image generation. The obvious use is art: designers and marketers can generate creative concepts instantly. Say you need illustrations for a campaign on future cities; generative AI can produce dozens of stylized visuals in minutes. For design, interior designers or architects use it to visualize room layouts or design ideas even before a blueprint is finalized. And for realistic images, retail companies generate pictures of people wearing their clothing items without needing real models or photo shoots, which reduces cost and increases personalization. The third application area is multimodal systems. These are combined systems that take one kind of input, or a combination of inputs, and produce different kinds of outputs, mixing text, images, and other media on both the input and output side. Text to image is being used in e-commerce, movie concept art, and educational content creation. Text to video is still in its early days, but imagine creating a product explainer video just by typing out the script; marketing teams love this for quick turnarounds. And text to audio: tools like ElevenLabs can convert text into realistic, human-like voiceovers, useful in training modules, audiobooks, and accessibility apps. So generative AI is no longer just a technical tool. It's becoming a creative copilot across departments, whether in marketing, design, product support, or even operations.
10:42 Lois: That's great! So, we've established that generative AI is pretty powerful. But what kind of risks does it pose for businesses and society in general? Himanshu: The first one is deepfakes. Generative AI can create fake but highly realistic media: video, audio, or even faces that look and sound authentic. Imagine a fake video of a political leader announcing a policy they never approved. This could cause mass confusion or even impact elections. In business, deepfakes can also be used in scams, where a CEO's voice is faked to approve fraudulent transactions. Number two: bias. If AI is trained on biased historical data, it can reinforce stereotypes even when unintended. For example, a hiring AI system that favors male candidates over equally qualified women because the historical data was biased. This bias can expose companies to discrimination lawsuits, brand damage, and ethical concerns. Number three is hallucinations. Sometimes AI systems confidently generate information that is completely wrong, without realizing it. You might ask a chatbot for a legal case summary, and it gives you a very convincing but entirely made-up court ruling. In sectors like health care, finance, or law, hallucinations could have serious, even dangerous, consequences if not caught. The fourth is copyright and IP issues. Generative AI creates new content, but often based on material it was trained on. Who owns the new work? A real-life example: an artist finds their unique style was copied by an AI that was trained on their paintings without permission. Companies using AI-generated content for marketing, branding, or product designs must watch for legal gray areas around copyright and intellectual property. So generative AI is not just a technology conversation; it's a responsibility conversation. Businesses must innovate and protect. Creativity and caution must go together. 12:50 Nikita: Let's move on to generative AI agents. How is a generative AI agent different from just a chatbot or a basic AI tool? Himanshu: Think of it like a smart assistant, not just answering your questions, but also taking actions on your behalf. You don't just ask, "What's the best flight to Vegas?" Instead, you tell the agent, "Book me a flight to Vegas and a room at the Hilton." And it goes ahead, understands that, finds the options, connects to the booking tools, and gets it done. So agents act on your behalf using goals, context, and tools, often with a degree of autonomy. Goals are user-defined outcomes, for example: I want to fly to Vegas and stay at the Hilton. Context includes preferences, history, and constraints, like economy class only, or don't book for Mondays. Tools could be APIs, databases, or services the agent can call, such as a travel API or a company calendar. Together, they let the agent reason, plan, and act. 14:02 Nikita: How does a gen AI agent work under the hood? Himanshu: Usually, they go through four stages. First, it understands and interprets your request: natural language understanding. Second, it figures out what needs to be done, in this case flight booking plus hotel search. Third, it retrieves data or connects to tools and APIs if needed, such as Skyscanner, Expedia, or a calendar. And fourth, it takes action: confirming the booking and giving you a response like, "your travel is booked." Keep in mind, not all gen AI agents are fully independent.
14:38 Lois: Himanshu, we've seen people use the terms generative AI agents and agentic AI interchangeably. What's the difference between the two? Himanshu: Agentic AI is a broad umbrella. It refers to any AI system that can perceive, reason, plan, and act toward a goal, and may improve and adapt over time. Most gen AI agents are reactive, not proactive. Agentic AI, on the other hand, can plan ahead, anticipate problems, and even adjust strategies. So gen AI agents are often semi-autonomous. They act in predefined ways or with human approval. Agentic systems can range from low to full autonomy. For example, Auto-GPT runs loops without user prompts, and an autonomous car decides routes and reactions. Most gen AI agents can only take multiple steps if explicitly designed that way, like step-by-step logic flows in LangChain. Agentic AI, by contrast, can plan across multiple steps with evolving decisions. On memory and goal persistence, gen AI agents are typically stateless. That means they forget their goal unless you remind them. Agentic systems remember, adapt, and refine based on goal progression. For example, a warehouse robot optimizing delivery based on changing layouts. Some generative AI agents are agentic, like Auto-GPT. They use LLMs to reason, plan, and act, but not all do. And likewise, not all agentic AIs are generative. For example, an autonomous car may use computer vision, control systems, and planning, but no generative models. So agentic AI is a design philosophy or system behavior: goal-driven, autonomous decision making. They can overlap, but as I said, not all generative AI agents are agentic, and not all agentic AI systems are generative. 16:39 Lois: What makes a generative AI agent actually work? Himanshu: A gen AI agent isn't just about answering the question. It's about breaking down a user's goal, figuring out how to achieve it, and then executing that plan intelligently. These agents are built from five core components, each playing a critical role. The first one is the goal. What is this agent trying to achieve? Think of this as the mission or intent. For example, if I tell the agent, help me organize a team meeting for Friday, the goal in that case would be to schedule a meeting. Number two is memory. What does it remember? This is the agent's context awareness: storing previous chats, preferences, or ongoing tasks. For example, if last week I said I prefer meetings in the afternoon, or I have already shared my team's availability, the agent can reuse that. Without memory, the agent behaves statelessly, like a typical chatbot that forgets context after every prompt. Third is tools. What can it access? Agents aren't just smart, they are also connected. They can be given access to tools like calendars, CRMs, web APIs, spreadsheets, and so on. The fourth one is the planner. How does it break down the goal? This is where the reasoning happens. The planner breaks big goals into step-by-step plans, for example, checking team availability, drafting the meeting invite, sending it, and then, probably, confirming the booking. Agents don't just guess. They reason and organize actions into a logical path. And the fifth and final one is the executor, which gets it done. This is where the action takes place. The executor performs what the planner lays out. For example, calling APIs, sending messages, booking reservations. If the planner is the architect, the executor is the builder.
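Here is a sketch of those five components as a small Python structure, with hypothetical placeholder tools standing in for a real calendar or email API.

from dataclasses import dataclass, field

@dataclass
class Agent:
    goal: str                                   # 1. the mission or intent
    memory: dict = field(default_factory=dict)  # 2. context it remembers
    tools: dict = field(default_factory=dict)   # 3. what it can access

    def planner(self):
        # 4. break the goal into a logical path (a real agent would reason with an LLM)
        return ["check_availability", "draft_invite", "send_invite"]

    def executor(self):
        # 5. carry out what the planner laid out
        for step in self.planner():
            self.tools[step](self.memory)

agent = Agent(
    goal="Schedule a team meeting for Friday",
    memory={"preferred_time": "afternoon"},  # remembered from an earlier chat
    tools={
        "check_availability": lambda m: print("Checking team calendars..."),
        "draft_invite": lambda m: print("Drafting invite for the " + m["preferred_time"]),
        "send_invite": lambda m: print("Invite sent."),
    },
)
agent.executor()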
18:36 Nikita: And where are generative AI agents being used? Himanshu: Generative AI agents aren't just abstract ideas, they are being used across business functions to eliminate repetitive work, improve consistency, and enable faster decision making. For marketing, a generative AI agent can search websites and social platforms to summarize competitor activity. They can draft content for newsletters or campaign briefs in your brand tone, and they can auto-generate email variations based on audience segment or engagement history. For finance, a generative AI agent can auto-generate financial summaries and dashboards by pulling from ERP spreadsheets and BI tools. They can also draft variance analysis and budget reports tailored for different departments. They can scan regulations or policy documents to flag potential compliance risks or changes. For sales, a generative AI agent can auto-draft personalized sales pitches based on customer behavior or past conversations. They can log CRM entries automatically once a meeting summary is generated. They can also generate battlecards or next-step recommendations based on the deal stage. For human resources, a generative AI agent can pre-screen resumes based on job requirements. They can send interview invites and coordinate calendars. A common theme here is that generative AI agents help you scale your teams without scaling the headcount. 20:02 Nikita: Himanshu, let's talk about the capabilities and benefits of generative AI agents. Himanshu: So generative AI agents are transforming how entire departments function. For example, in customer service, 24/7 AI agents handle first-level queries, freeing humans for complex cases. They also enhance decision making. Agents can quickly analyze reports, summarize lengthy documents, or spot trends across data sets. For example, a finance agent reviewing Excel data can highlight cash flow anomalies or forecast trends faster than a team of analysts. For personalization, agents can deliver unique, tailored experiences without manual effort. For example, in marketing, agents generate personalized product emails based on each user's past behavior. For operational efficiency, they can reduce repetitive, low-value tasks. For example, an HR agent can screen hundreds of resumes, shortlist candidates, and auto-schedule interviews, saving the HR team hours each week. 21:06 Lois: Ok. And what are the risks of using generative AI agents? Himanshu: The first one is job displacement. Let's be honest, automation raises concerns. Roles involving repetitive tasks, such as data entry and content sorting, are at risk. Next is ethics and accountability: when an AI agent makes a mistake, who is responsible? For example, if an AI makes a biased hiring decision or gives incorrect medical guidance, businesses must ensure accountability and fairness. For data privacy, agents often access sensitive data, for example, employee records or customer history. If mishandled, it could lead to compliance violations. And agents may generate confident but incorrect outputs, called hallucinations. These can mislead users, especially in critical domains like health care, finance, or law. So generative AI agents aren't just tools, they are a force multiplier. But they need to be deployed thoughtfully, with a human lens and strong guardrails.
And that's how we ensure the benefits outweigh the risks. 22:10 Lois: Thank you so much, Himanshu, for educating us. We've had such a great time with you! If you want to learn more about the topics discussed today, head over to mylearn.oracle.com and get started on the AI for You course. Nikita: Join us next week as we chat about AI workflows and tools. Until then, this is Nikita Abraham… Lois: And Lois Houston signing off! 22:32 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
Oracle University Podcast
In this episode, Lois Houston and Nikita Abraham continue their discussion on AI fundamentals, diving into Data Science with Principal AI/ML Instructor Himanshu Raj. They explore key concepts like data collection, cleaning, and analysis, and talk about how quality data drives impactful insights. AI for You: https://mylearn.oracle.com/ou/course/ai-for-you/152601/252500 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ---------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast. I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me today is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! Last week, we began our exploration of core AI concepts, specifically machine learning and deep learning. I'd really encourage you to go back and listen to the episode if you missed it. 00:52 Lois: Yeah, today we're continuing that discussion, focusing on data science, with our Principal AI/ML Instructor Himanshu Raj. Nikita: Hi Himanshu! Thanks for joining us again. So, let's get cracking! What is data science? 01:06 Himanshu: It's about collecting, organizing, analyzing, and interpreting data to uncover valuable insights that help us make better business decisions. Think of data science as the engine that transforms raw information into strategic action. You can think of a data scientist as a detective. They gather clues, which is our data, connect the dots between those clues, and ultimately solve mysteries, meaning they find hidden patterns that can drive value. 01:33 Nikita: Ok, and how does this happen exactly? Himanshu: Just like a detective relies on both instincts and evidence, data science blends domain expertise and analytical techniques. First, we collect raw data. Then we prepare and clean it, because messy data leads to messy conclusions. Next, we analyze to find meaningful patterns in that data. And finally, we turn those patterns into actionable insights that businesses can trust. 02:00 Lois: So what you're saying is, data science is not just about technology; it's about turning information into intelligence that organizations can act on. Can you walk us through the typical steps a data scientist follows in a real-world project? Himanshu: So it all begins with business understanding. Identifying the real problem we are trying to solve. It's not about collecting data blindly. It's about asking the right business questions first. And once we know the problem, we move to data collection, which is gathering the relevant data from available sources, whether internal or external. Next is data cleaning. Probably the least glamorous but one of the most important steps. And this is where we fix missing values, remove errors, and ensure that the data is usable. Then we perform data analysis, or what we call exploratory data analysis. Here we look for patterns, trends, and initial signals hidden inside the data.
After that comes the modeling and evaluation, where we apply machine learning or deep learning techniques to predict, classify, or forecast outcomes. Machine learning and deep learning are like specialized equipment in a data science detective's toolkit. Powerful, but not the whole investigation. We also check how good the models are in terms of accuracy, relevance, and business usefulness. Finally, if the model meets expectations, we move to deployment and monitoring, putting the model into real-world use and continuously watching how it performs over time. 03:34 Nikita: So, it's a linear process? Himanshu: It's not linear. That's because in real-world data science projects, the process does not stop after deployment. Once the model is live, business needs may evolve, new data may become available, or unexpected patterns may emerge. And that's why we come back to business understanding again, redefining the questions, the strategy, and sometimes even the goals based on what we have learned. In a way, a good data science project behaves like a living system which grows, adapts, and improves over time. Continuous improvement keeps it aligned with business value. Now, think of it like adjusting your GPS while driving. The route you plan initially might change as new traffic data comes in. Similarly, in data science, new information constantly helps refine our course. The quality of our data determines the quality of our results. If the data we feed into our models is messy, inaccurate, or incomplete, the outputs, no matter how sophisticated the technology, will also be unreliable. And this concept is often called garbage in, garbage out. Bad input leads to bad output. Now, think of it like cooking. Even the world's best Michelin star chef can't create a masterpiece with spoiled or poor-quality ingredients. In the same way, even the most advanced AI models can't perform well if the data they are trained on is flawed. 05:05 Lois: Yeah, that's why high-quality data is not just nice to have, it's absolutely essential. But Himanshu, what makes data good? Himanshu: Good data has a few essential qualities. The first one is complete. Make sure we aren't missing any critical field. For example, every customer record must have a phone number and an email. Second, it should be accurate. The data should reflect reality. If a customer's address has changed, it must be updated, not outdated. Third, it should be consistent. Similar data must follow the same format. Imagine if the dates are written differently, like 2024/04/28 versus April 28, 2024. We must standardize them. Fourth, good data should be relevant. We collect only the data that actually helps solve our business question, not unnecessary noise. And last, it should be timely. So data should be up to date. Using last year's purchase data for a real-time recommendation engine wouldn't be helpful. 06:13 Nikita: Ok, so ideally, we should use good data. But that's a bit difficult in reality, right? Because what comes to us is often pretty messy. So, how do we convert bad data into good data? I'm sure there are processes we use to do this. Himanshu: The first one is cleaning. This is about correcting simple mistakes, like fixing typos in city names or standardizing dates. The second one is imputation. If some values are missing, we fill them intelligently, for instance, using the average income for a missing salary field. The third one is filtering. In this, we remove irrelevant or noisy records, like discarding fake email signups from marketing data. The fourth one is enriching. We can even enhance our data by adding trusted external sources, like appending credit scores from a verified bureau. And the last one is transformation. Here, we finally reshape data formats to be consistent, for example, converting all units to the same currency. So even messy data can become usable, but it takes deliberate effort, a structured process, and attention to quality at every step.
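A minimal sketch of cleaning, imputation, and filtering in Python with pandas (assuming a recent pandas version); the customer table and its columns are made up for illustration.

import pandas as pd

df = pd.DataFrame({
    "city": ["New York", "new york", "Boston", "Boston"],
    "signup_date": ["2024/04/28", "April 28, 2024", "2024/05/01", "2024/05/01"],
    "salary": [55000.0, None, 72000.0, 72000.0],
})

# Cleaning: standardize city names and date formats.
df["city"] = df["city"].str.title()
df["signup_date"] = pd.to_datetime(df["signup_date"], format="mixed")

# Imputation: fill the missing salary with the average of the known values.
df["salary"] = df["salary"].fillna(df["salary"].mean())

# Filtering: drop exact duplicate records.
df = df.drop_duplicates()
print(df)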
07:26 Oracle University's Race to Certification 2025 is your ticket to free training and certification in today's hottest technology. Whether you're starting with Artificial Intelligence, Oracle Cloud Infrastructure, Multicloud, or Oracle Data Platform, this challenge covers it all! Learn more about your chance to win prizes and see your name on the Leaderboard by visiting education.oracle.com/race-to-certification-2025. That's education.oracle.com/race-to-certification-2025. 08:10 Nikita: Welcome back! Himanshu, we spoke about how to clean data. Now, once we get high-quality data, how do we analyze it? Himanshu: In data science, there are four primary types of analysis we typically apply, depending on the business goal we are trying to achieve. The first one is descriptive analysis. It helps summarize and report what has happened, often using averages, totals, or percentages. For example, retailers use descriptive analysis to understand things like, what was the average customer spend last quarter? How did store foot traffic trend across months? The second one is diagnostic analysis. Diagnostic analysis digs deeper into why something happened. For example, hospitals use this type of analysis to find out why a certain department has higher patient readmission rates. Was it due to staffing, post-treatment care, or patient demographics? The third one is predictive analysis. Predictive analysis looks forward, trying to forecast future outcomes based on historical patterns. For example, energy companies predict future electricity demand, so they can better manage resources and avoid shortages. And the last one is prescriptive analysis. It does not just predict. It recommends specific actions to take. So logistics and supply chain companies use prescriptive analytics to suggest the most efficient delivery routes or warehouse stocking strategies based on traffic patterns, order volume, and delivery deadlines. 09:42 Lois: So really, we're using data science to solve everyday problems. Can you walk us through some practical examples of how it's being applied? Himanshu: The first one is predictive maintenance. It's used a lot in manufacturing. A factory collects real-time sensor data from machines. Data scientists first clean and organize this massive data stream, explore patterns of past failures, and design predictive models. The goal is not just to predict breakdowns but to optimize maintenance schedules, reducing downtime and saving millions. The second one is recommendation systems, prevalent in the retail and entertainment industries. Companies like Netflix or Amazon gather massive user interaction data, such as views, purchases, and likes. Data scientists structure and analyze this behavioral data to find meaningful patterns of preferences and build models that suggest relevant content, eventually driving more engagement and loyalty. The third one is fraud detection, applied in the finance and banking sector. Banks store vast amounts of transaction records. Data scientists clean and prepare this data, understand typical spending behaviors, and then use statistical techniques and machine learning to spot unusual patterns, catching fraud faster than manual checks could ever achieve.
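As a toy illustration of the fraud-detection idea, here is a sketch that flags transactions far from a customer's typical spending using a simple z-score; real systems use far richer features and trained models.

import numpy as np

amounts = np.array([42.0, 18.5, 55.0, 23.0, 61.0, 4200.0])  # one suspicious charge

z_scores = (amounts - amounts.mean()) / amounts.std()

# Flag anything more than two standard deviations from typical spending.
print(amounts[np.abs(z_scores) > 2])  # [4200.]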
The last one is customer segmentation, which is often applied in marketing. Businesses collect demographics and behavioral data about their customers. Instead of treating all customers the same, data scientists use clustering techniques to find natural groupings, and this insight helps businesses tailor their marketing efforts, offers, and communication for each of those individual groups, making them far more effective. Across all these examples, notice that data science isn't just building a model. Again, it's understanding the business need, reviewing the data, analyzing it thoughtfully, and building the right solution while helping the business act smarter. 11:44 Lois: Thank you, Himanshu, for joining us on this episode of the Oracle University Podcast. We can't wait to have you back next week for part 3 of this conversation on core AI concepts, where we'll talk about generative AI and gen AI agents. Nikita: And if you want to learn more about data science, visit mylearn.oracle.com and search for the AI for You course. Until next time, this is Nikita Abraham… Lois: And Lois Houston signing off! 12:13 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
Oracle University Podcast
Join hosts Lois Houston and Nikita Abraham, along with Principal AI/ML Instructor Himanshu Raj, as they dive deeper into the world of artificial intelligence, analyzing the types of machine learning. They also discuss deep learning, including how it works, its applications, and its advantages and challenges. From chatbot assistants to speech-to-text systems and image recognition, they explore how deep learning is powering the tools we use today. AI for You: https://mylearn.oracle.com/ou/course/ai-for-you/152601/252500 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast. I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! Last week, we went through the basics of artificial intelligence. If you missed it, I really recommend listening to that episode before you start this one. Today, we're going to explore some foundational AI concepts, starting with machine learning. After that, we'll discuss the two main machine learning models: supervised learning and unsupervised learning. And we'll close with deep learning. Lois: Himanshu Raj, our Principal AI/ML Instructor, joins us for today's episode. Hi Himanshu! Let's dive right in. What is machine learning? 01:12 Himanshu: Machine learning lets computers learn from examples to make decisions or predictions without being told exactly what to do. It helps computers learn from past data and examples so they can spot patterns and make smart decisions just like humans do, but faster and at scale. 01:31 Nikita: Can you give us a simple analogy so we can understand this better? Himanshu: When you train a dog to sit or fetch, you don't explain the logic behind the command. Instead, you give the dog examples and reinforce correct behavior with rewards, which could be a treat, a pat, or praise. Over time, the dog learns to associate the command with the action and reward. Machine learning learns in a similar way, but with data instead of dog treats. We feed a mathematical system called a model with multiple examples of input and the desired output, and it learns the pattern. It's trial and error, learning from experience. Here is another example. Recognizing faces. Humans are incredibly good at this, even as babies. We don't need someone to explain every detail of a face. We just see many faces over time and learn the patterns. Machine learning models can be trained the same way. We show them thousands or millions of face images, each labeled, and they start to detect patterns like eyes, nose, mouth, spacing, and different angles. So eventually, they can recognize faces they have seen before or even match new ones that are similar. So machine learning doesn't rely on hand-written rules; it just learns from examples.
This is the kind of learning behind things like face ID on your smartphone, security systems that recognize employees, or even Facebook tagging people in your photos. 03:05 Lois: So, what you're saying is, in machine learning, instead of telling the computer exactly what to do in every situation, you feed the model with data and give it examples of inputs and the correct outputs. Over time, the model figures out patterns and relationships within the data on its own, and it can make a smart guess when it sees something new. I got it! Now let's move on to how machine learning actually works. Can you take us through the process step by step? Himanshu: Machine learning actually happens in three steps. First, we have the input, which is the training data. Think of this as showing the model a series of examples. It could be images, historical sales data, or customer complaints, whatever we want the machine to learn from. Next comes the pattern finding. This is the brain of the system, where the model starts spotting relationships in the data. It figures out things like: customers who churn or leave usually contact support twice in the same month. It's not given rules, it just learns patterns based on the examples. And finally, we have the output, which is the prediction or decision. This is the result of all this learning. Once trained, the computer or model can say, this customer is likely to churn or leave. It's like having a smart assistant that makes fast, data-driven guesses without needing step-by-step instructions. 04:36 Nikita: What are the main elements in machine learning? Himanshu: In machine learning, we work with two main elements, features and labels. You can think of features as the clues we provide to the model, pieces of information like age, income, or product type. And the label is the solution we want the model to predict, like whether a customer will buy or not. 04:55 Nikita: Ok, I think we need an example here. Let's go with the one you mentioned earlier about customers who churn. Himanshu: Imagine we have a table with data like customer age, number of visits, and whether they churned or not. Each of these rows is one example. The features are age and visit count. The label is whether the customer churned, that is, yes or no. Over time, the model might learn patterns like: customers under 30 who visit only once are more likely to leave. Or frequent visitors above age 45 rarely churn. If features are the clues, then the label is the solution, and the model is the brain of the system. It's what machine learning builds after learning from many examples, just like we do. And again, the better the features are, the better the learning. ML is just looking for patterns in the data we give it. 05:51 Lois: Ok, we're with you so far. Let's talk about the different types of machine learning. What is supervised learning? Himanshu: Supervised learning is a type of machine learning where the model learns from the input data and the correct answers. Once trained, the model can use what it learned to predict the correct answer for new, unseen inputs. Think of it like a student learning from a teacher. The teacher shows labeled examples like an apple and says, "this is an apple." The student receives feedback on whether their guess was right or wrong. Over time, the student learns to recognize new apples on their own. And that's exactly how supervised learning works. It's learning from feedback using labeled data and then making predictions.
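A minimal sketch of that feedback loop, assuming scikit-learn and a tiny made-up table of customer age, visit count, and churn labels:

from sklearn.linear_model import LogisticRegression

# Features: [age, visits]; label: 1 = churned, 0 = stayed.
X = [[25, 1], [28, 1], [45, 12], [50, 9], [23, 2], [47, 10]]
y = [1, 1, 0, 0, 1, 0]

model = LogisticRegression()
model.fit(X, y)  # learn the pattern from labeled examples

print(model.predict([[26, 1]]))   # young, one visit: likely to churn
print(model.predict([[48, 11]]))  # older frequent visitor: likely to stay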
06:38 Nikita: Ok, so supervised learning means we train the model using labeled data. We already know the right answers, and we're essentially teaching the model to connect the dots between the inputs and the expected outputs. Now, can you give us a few real-world examples of supervised learning? Himanshu: First, house price prediction. In this case, we give the model features like square footage, location, and number of bedrooms, and the label is the actual house price. Over time, it learns how to predict prices for new homes. The second one is email: spam or not. In this case, features might include words in the subject line, the sender, or links in the email. The label is whether the email is spam or not. The model learns patterns to help us filter our inboxes, as you may have seen in your Gmail inbox. The third one is cat versus dog classification. Here, the features are the pixels in an image, and the label tells us whether it's a cat or a dog. After seeing many examples, the model learns to tell the difference on its own. Let's now focus on one very common form of supervised learning, that is, regression. Regression is used when we want to predict a numerical value, not a category. In simple terms, it helps answer questions like, how much will it be? Or what will the value be? For example, predicting the price of a house based on its size, location, and number of rooms. Or estimating next quarter's revenue based on marketing spend. 08:18 Lois: Are there any other types of supervised learning? Himanshu: While regression is about predicting a number, classification is about predicting a category or type. You can think of it as the model answering: is this yes or no, or which group does this belong to? Classification is used when the goal is to predict a category or a class. Here, the model learns patterns from historical data where both the input variables, known as features, and the correct categories, called labels, are already known. 08:53 Ready to level-up your cloud skills? The 2025 Oracle Fusion Cloud Applications Certifications are here! These industry-recognized credentials validate your expertise in the latest Oracle Fusion Cloud solutions, giving you a competitive edge and helping drive real project success and customer satisfaction. Explore the certification paths, prepare with MyLearn, and position yourself for the future. Visit mylearn.oracle.com to get started today. 09:25 Nikita: Welcome back! So that was supervised machine learning. What about unsupervised machine learning, Himanshu? Himanshu: Unlike supervised learning, here the model is not given any labels or correct answers. It's just handed the raw input data and left to make sense of it on its own. The model explores the data and discovers hidden patterns, groupings, or structures on its own, without being explicitly told what to look for. And it's more like a student learning from observations and making their own inferences. 09:55 Lois: Where is unsupervised machine learning used? Can you take us through some of the use cases? Himanshu: The first one is product recommendation. Customers are grouped based on shared behavior even without knowing their intent. This helps show what other users like you also prefer. The second one is anomaly detection. Unusual patterns, such as fraud, network breaches, or manufacturing defects, can stand out, all without needing thousands of labeled examples. And the third one is customer segmentation. Customers can be grouped by purchase history or behavior to tailor experiences, pricing, or marketing campaigns.
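To make segmentation concrete, here is a minimal unsupervised sketch, assuming scikit-learn and a made-up table of customer behavior; notice there are no labels anywhere.

from sklearn.cluster import KMeans

# Each row is a customer: [orders per year, average spend].
customers = [[2, 20], [3, 25], [30, 210], [28, 190], [1, 15], [35, 250]]

# Ask for two groups; the algorithm finds them on its own.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
print(kmeans.fit_predict(customers))  # e.g., [0 0 1 1 0 1]: occasional vs. frequent buyers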
10:32 Lois: And finally, we come to deep learning. What is deep learning, Himanshu? Himanshu: Humans learn from experience by seeing patterns repeatedly. The brain learns to recognize an image by seeing it many times. The human brain contains billions of neurons. Each neuron is connected to others through synapses. Neurons communicate by passing signals. The brain adjusts connections based on repeated stimuli. Deep learning was inspired by how the brain works, using artificial neurons and connections. Just like our brains need a lot of examples to learn, so do deep learning models. The more layers and connections there are, the more complex the patterns it can learn. The brain is not hard-coded. It learns from patterns. Deep learning follows the same idea. Metaphorically speaking, a deep learning model can have over a billion neurons, more than a cat's brain, which has around 250 million neurons. Here, the neurons are mathematical units, often called nodes, or simply units. Layers of these units are connected, mimicking how biological neurons interact. So deep learning is a type of machine learning where the computer learns to understand complex patterns. What makes it special is that it uses neural networks with many layers, which is why we call it deep learning. 11:56 Lois: And how does deep learning work? Himanshu: Deep learning is all about finding high-level meaning from low-level data layer by layer, much like how our brains process what we see and hear. A neural network is a system of connected artificial neurons, or nodes, that work together to learn patterns and make decisions. 12:15 Nikita: I know there are different types of neural networks, with ANNs or Artificial Neural Networks being the one for general learning. How is it structured? Himanshu: There is an input layer, which is the raw data (it could be an image, a sentence, or numbers), a hidden layer where the patterns are detected or the features are learned, and an output layer where the final decision is made. For example, given an image, is this a dog? A neural network is like a team of virtual decision makers, called artificial neurons, or nodes, working together. It takes input data, like a photo, and passes it through layers of neurons. Each neuron makes a small judgment and passes its result to the next layer. This process happens across multiple layers, learning more and more complex patterns as it goes, and the final layer gives the output. Imagine a factory assembly line where each station, or layer, refines the input a bit more. By the end, you have turned raw parts into something meaningful. And this is a very simple analogy. This structure forms the foundation of many deep learning models. More advanced architectures, like convolutional neural networks (CNNs) for images, or recurrent neural networks (RNNs) for sequences, build upon this basic idea. So, what I mean is that the ANN is the base structure, like LEGO bricks. CNNs and RNNs use those same bricks, but arrange them in ways that are better suited for images, videos, or sequences like text or speech.
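A minimal sketch of that input-hidden-output structure, using NumPy with random, untrained weights; training would adjust the weights from labeled examples.

import numpy as np

rng = np.random.default_rng(0)

x = rng.random(4)                  # input layer: 4 raw numbers (say, pixel values)
W1 = rng.standard_normal((4, 3))   # connections into 3 hidden units
W2 = rng.standard_normal((3, 1))   # connections into 1 output unit

hidden = np.maximum(0, x @ W1)             # each hidden unit makes a small judgment
output = 1 / (1 + np.exp(-(hidden @ W2)))  # squash to a probability: "is this a dog?"
print(output)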
13:52 Nikita: So, why do we call it deep learning? Himanshu: The word deep in deep learning does not refer to how profound or intelligent the model is. It actually refers to the number of layers in the neural network. It starts with an input layer, followed by hidden layers, and ends with an output layer. The layers are called hidden in the sense that they are black boxes: their data is not visible or directly interpretable to the user. A model which has only one hidden layer is called shallow learning. As data moves, each layer builds on what the previous layer has learned. So layer one might detect a very basic feature, like edges or colors in an image. Layer two can take those edges and start forming shapes, like curves or lines. And layer three uses those shapes to identify complete objects, like a face, a car, or a person. This hierarchical learning is what makes deep learning so powerful. It allows the model to learn abstract patterns and generalize across complex data, whether it's visual, audio, or even language. And that's the essence of deep learning. It's not just about layers. It's about how each layer refines the information and gets one step closer to understanding. 15:12 Nikita: Himanshu, where does deep learning show up in our everyday lives? Himanshu: Deep learning is not just about futuristic robots, it's already powering the tools we use today. So think of when you interact with a virtual assistant on a website. Whether you are booking a hotel, resolving a banking issue, or asking customer support questions, behind the scenes, deep learning models understand your text, interpret your intent, and respond intelligently. There are many real-life examples, such as ChatGPT, Google's Gemini, any airline website's chatbot, or a bank's virtual agent. The next one is speech-to-text systems. For example, if you have ever used voice typing on your phone, dictated a message to Siri, or used Zoom's live captions, you have seen this in action already. The system listens to your voice and instantly converts it into text. This saves time, enhances accessibility, and helps automate tasks, like meeting transcriptions. Again, you would have seen real-life examples, such as Siri, Google Assistant, autocaptioning on Zoom, or YouTube Live subtitles. And lastly, image recognition. For example, hospitals today use AI to detect early signs of cancer in X-rays and CT scans that might be missed by the human eye. Deep learning models can analyze visual patterns, like a suspicious spot on a lung X-ray, and flag abnormalities faster and more consistently than humans. Self-driving cars recognize stop signs, pedestrians, and other vehicles using the same technology. So, for example, cancer detection in medical imaging, Tesla's self-driving navigation, and security systems that recognize faces are very prominent examples of image recognition. 17:01 Lois: Deep learning is one of the most powerful tools we have today to solve complex problems. But like any tool, I'm sure it has its own set of pros and cons. What are its advantages, Himanshu? Himanshu: The first is high accuracy. When trained with enough data, deep learning models can outperform humans. For example, spotting early signs of cancer in X-rays with higher accuracy. Second is handling of unstructured data. Deep learning shines when working with messy real-world data, like images, text, and voice. And it's why your phone can recognize your face or transcribe your speech into text. The third one is automatic pattern learning. Unlike traditional models that need hand-coded features, deep learning models figure out important patterns by themselves, making them extremely flexible. And the fourth one is scalability.
Once trained, deep learning systems can scale easily, serving millions of customers, like Netflix recommending movies personalized to each one of us. 18:03 Lois: And what about its challenges? Himanshu: The first one is that it's data and resource intensive. Deep learning demands huge amounts of labeled data and powerful computing hardware, which means high cost, especially during training. The second is that it lacks explainability. These models often act like a black box. We know the output, but it's hard to explain exactly how the model reached that decision. This becomes a problem in areas like health care and finance, where transparency is critical. The third challenge is vulnerability to bias. If the data contains biases, like favoring certain groups, the model will learn and amplify those biases unless we manage them carefully. The fourth and last challenge is that it's harder to debug and maintain. Unlike a traditional software program, it's tough to manually correct a deep learning model if it starts behaving unpredictably. It requires retraining with new data. So deep learning offers powerful opportunities to solve complex problems using data, but it also brings challenges that require careful strategy, resources, and responsible use. 19:13 Nikita: We're taking away a lot from this conversation. Thank you so much for your insights, Himanshu. Lois: If you're interested in learning more, make sure you log into mylearn.oracle.com and look for the AI for You course. Join us next week for part 2 of the discussion on AI Concepts & Terminology, where we'll focus on Data Science. Until then, this is Lois Houston… Nikita: And Nikita Abraham signing off! 19:39 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
Oracle University Podcast
In this episode, hosts Lois Houston and Nikita Abraham, together with Senior Cloud Engineer Nick Commisso, break down the basics of artificial intelligence (AI). They discuss the differences between Artificial General Intelligence (AGI) and Artificial Narrow Intelligence (ANI), and explore the concepts of machine learning, deep learning, and generative AI. Nick also shares examples of how AI is used in everyday life, from navigation apps to spam filters, and explains how AI can help businesses cut costs and boost revenue. AI for You: https://mylearn.oracle.com/ou/course/ai-for-you/152601/252500 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ----------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Nikita: Hello and welcome to the Oracle University Podcast. I'm Nikita Abraham, Team Lead of Editorial Services with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hi everyone! Welcome to a new season of the podcast. I'm so excited about this one because we're going to dive into the world of artificial intelligence, speaking to many experts in the field. Nikita: If you've been listening to us for a while, you probably know we've covered AI from a bunch of different angles. But this time, we're dialing it all the way back to basics. We wanted to create something for the absolute beginner, so no jargon, no assumptions, just simple conversations that anyone can follow. 01:08 Lois: That's right, Niki. You don't need to have a technical background or prior experience with AI to get the most out of these episodes. In our upcoming conversations, we'll break down the basics of AI, explore how it's shaping the world around us, and understand its impact on your business. Nikita: The idea is to give you a practical understanding of AI that you can use in your work, especially if you're in sales, marketing, operations, HR, or even customer service. 01:37 Lois: Today, we'll talk about the basics of AI with Senior Cloud Engineer Nick Commisso. Hi Nick! Welcome back to the podcast. Can you tell us about human intelligence and how it relates to artificial intelligence? And within AI, I know we have Artificial General Intelligence, or AGI, and Artificial Narrow Intelligence, or ANI. What's the difference between the two? Nick: Human intelligence is the intellectual capability of humans that allows us to learn new skills through observation and mental digestion, to think through and understand abstract concepts and apply reasoning, and to communicate using language and understand non-verbal cues, such as facial expressions, tone variation, and body language. We can handle objections and situations in real time, even in a complex setting. We can plan for short and long-term situations or projects. And we can create music and art, invent something new, or have original ideas. If machines can replicate a wide range of human cognitive abilities, such as learning, reasoning, or problem solving, we call it artificial general intelligence.
Now, AGI is hypothetical for now, but when we apply AI to solve problems with specific, narrow objectives, we call it artificial narrow intelligence, or ANI. AGI is a hypothetical AI that thinks like a human. It represents the ultimate goal of artificial intelligence, which is a system capable of chatting, learning, and even arguing like us. If AGI existed, it would take a form like a robot doctor that accurately diagnoses and comforts patients, or an AI teacher that customizes lessons in real time based on each student's mood, pace, and learning style, or an AI therapist that comprehends complex emotions and provides empathetic, personalized support. ANI, on the other hand, focuses on doing one thing really well. It's designed to perform specific tasks by recognizing patterns and following rules, but it doesn't truly understand or think beyond its narrow scope. Think of ANI as a specialist. Your phone's face ID can recognize you instantly, but it can't carry on a conversation. Google Maps finds the best route, but it can't write you a poem. And spam filters catch junk mail, but they can't make you coffee. So, most of the AI you interact with today is ANI. It's smart, efficient, and practical, but limited to specific functions without general reasoning or creativity. 04:22 Nikita: Ok, then what about generative AI? Nick: Generative AI is a type of AI that can produce content such as audio, text, code, video, and images. ChatGPT can write essays, but it can't fact-check itself. DALL-E creates art, but it doesn't actually know if it's good. And AI song covers can create deepfakes, like Drake singing "Baby Shark." 04:47 Lois: Why should I care about AI? Why is it important? Nick: AI is already part of your everyday life, often working quietly in the background. ANI powers things like navigation apps, voice assistants, and spam filters. Generative AI helps create everything from custom playlists to smart writing tools. And while AGI isn't here yet, it's shaping ideas about what the future might look like. Now, AI is not just a buzzword, it's a tool that's changing how we live, work, and interact with the world. So, whether you're using it, learning about it, or just curious, it's worth knowing what's behind the tech that's becoming part of everyday life. 05:32 Lois: Nick, whenever people talk about AI, they also throw around terms like machine learning and deep learning. What are they, and how do they relate to AI? Nick: As we shared earlier, AI is the ability of machines to imitate human intelligence. And Machine Learning, or ML, is a subset of AI where algorithms are used to learn from past data and predict outcomes on new data, or to identify trends from the past. Deep Learning, or DL, is a subset of machine learning that uses neural networks to learn patterns from complex data and make predictions or classifications. And Generative AI, or GenAI, on the other hand, is a specific application of DL focused on creating new content, such as text, images, and audio, by learning the underlying structure of the training data. 06:24 Nikita: AI is often associated with key domains like language, speech, and vision, right? So, could you walk us through some of the specific tasks or applications within each of these areas? Nick: Language-related AI tasks can be text related or generative AI. Text-related AI tasks use text as input, and the output can vary depending on the task. Some examples include detecting language, extracting entities in a text, extracting key phrases, and so on.
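As a toy illustration of a text-related task, here is a deliberately naive key-phrase extractor in plain Python; production systems would use a trained language model rather than word counts.

from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "and", "or", "is", "to", "of", "in"}

def key_phrases(text, top=3):
    # Count the most frequent non-trivial words as a crude "key phrase" signal.
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [word for word, _ in counts.most_common(top)]

print(key_phrases("The contract violation in the tech sector led to a contract dispute"))
# ['contract', 'violation', 'tech']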
06:54 Lois: Ok, I get you. That's like translating text, where you can use a text translation tool, type your text in the box, choose your source and target language, and then click Translate. That would be an example of a text-related AI task. What about generative AI language tasks? Nick: These are generative, which means the output text is generated by the model. Some examples are creating text like stories or poems, summarizing texts, answering questions, and so on. 07:25 Nikita: What about speech and vision? Nick: Speech-related AI tasks can be audio related or generative AI. Speech-related AI tasks use audio or speech as input, and the output can vary depending on the task. For example, speech-to-text conversion, speaker recognition, or voice conversion, and so on. Generative AI tasks are generative, i.e., the output audio is generated by the model (for example, music composition or speech synthesis). Vision-related AI tasks can be image related or generative AI. Image-related AI tasks use an image as the input, and the output depends on the task. Some examples are classifying images or identifying objects in an image. Facial recognition is one of the most popular image-related tasks, often used for surveillance and tracking people in real time. It's used in a lot of different fields, like security and biometrics, law enforcement, entertainment, and social media. For generative AI tasks, the output image is generated by the model. For example, creating an image from a textual description, or generating images of a specific style or high resolution, and so on. It can create extremely realistic new images and videos by generating original 3D models of objects, such as machines, buildings, medications, people, and landscapes, and so much more. 08:58 Lois: This is so fascinating. So, now we know what AI is capable of. But Nick, what is AI good at? Nick: AI frees you to focus on creativity and the more challenging parts of your work. Now, AI isn't magic. It's just very good at certain tasks. It handles work that's repetitive, time consuming, or too complex for humans, like processing data or spotting patterns in large data sets. AI can take over routine tasks that are essential but monotonous. Examples include entering data into spreadsheets, processing invoices, or even scheduling meetings, freeing up time for more meaningful work. AI can support professionals by extending their abilities. Now, this includes tools like AI-assisted coding for developers, real-time language translation for travelers or global teams, and advanced image analysis to help doctors interpret medical scans much more accurately. 10:00 Nikita: And what would you say is AI's sweet spot? Nick: That would be tasks that are both doable and valuable. A few examples of tasks that are technically feasible and have business value are things like predicting equipment failure. This prevents downtime and loss of business. Call center automation, like the routing of calls to the right person. This saves time and improves customer satisfaction. Document summarization and review. This helps save time for busy professionals. Or inspecting power lines. Now, this task is dangerous. Automating it protects human life and saves time. 10:48 Oracle University's Race to Certification 2025 is your ticket to free training and certification in today's hottest tech. Whether you're starting with Artificial Intelligence, Oracle Cloud Infrastructure, Multicloud, or Oracle Data Platform, this challenge covers it all!
Learn more about your chance to win prizes and see your name on the Leaderboard by visiting education.oracle.com/race-to-certification-2025. That's education.oracle.com/race-to-certification-2025. 11:30 Nikita: Welcome back! Now one big way AI is helping businesses today is by cutting costs, right? Can you give us some examples of this? Nick: Now, AI can contribute to cost reduction in several key areas. For instance, chatbots are capable of managing up to 50% of customer queries. This significantly reduces the need for manual support, thereby lowering operational costs. AI can streamline workflows, for example, reducing invoice processing time from 10 days to just 1 hour. This leads to substantial savings in both time and resources. In addition to cost savings, AI can also support revenue growth. One way is enabling personalization and upselling. Platforms like Netflix use AI-driven recommendation systems to influence user choices. This not only enhances the user experience, but also increases engagement and subscription revenue. Another is unlocking new revenue streams. AI technologies, such as generative video tools and virtual influencers, are creating entirely new avenues for advertising and branded content, expanding business opportunities in emerging markets. 12:50 Lois: Wow, saving money and boosting bottom lines. That's a real win! But Nick, how is AI able to do this? Nick: Now, data is what teaches AI. Just like we learn from experience, so does AI. It learns from good examples, bad examples, and sometimes even the absence of examples. The quality and variety of data shape how smart, accurate, and useful AI becomes. Imagine teaching a kid to recognize animals using only pictures of squirrels that are labeled dogs. That would be very confusing at the dog park. AI works the exact same way, where bad data leads to bad decisions. With the right data, AI can be powerful and accurate. But with poor or biased data, it can become unreliable and even misleading. AI amplifies whatever you feed it. So, give it gourmet data, not data junk food. AI is like a chef. It needs the right ingredients. It needs numbers for predictions, like will this product sell? It needs images for cool tricks like detecting tumors, and text for chatting, or generating excuses for why you'd be late. Variety keeps AI from being a one-trick pony. Examples of the types of data: numbers, used in machine learning for predicting things like the weather; text, used in generative AI, where chatbots write emails or bad poetry; images, used in deep learning for identifying defective parts on an assembly line; or audio, used to transcribe a doctor's dictation into text. 14:35 Lois: With so much data available, things can get pretty confusing, which is why we have the concept of labeled and unlabeled data. Can you help us understand what that is? Nick: Labeled data is like flashcards, where everything has an answer. Spam filters learn from emails that are already marked as junk, and X-rays are marked either normal or pneumonia. Let's say we're training AI to tell cats from dogs, and we show it a hundred labeled pictures. Cat, dog, cat, dog, etc. Over time, it learns: fluffy and pointy ears? That's probably a cat. And then we test it with new pictures to verify. Unlabeled data is like a mystery box, where AI has to figure things out itself. Social media posts, or product reviews, have no labels. So, AI clusters them by similarity.
AI finding trends in unlabeled data is like a kid sorting through LEGOs without instructions. No one tells them which blocks will go together. 15:36 Nikita: With all the data that's being used to train AI, I'm sure there are issues that can crop up too. What are some common problems, Nick? Nick: AI's performance depends heavily on the quality of its data. Poor or biased data leads to unreliable and unfair outcomes. Dirty data includes errors like typos, missing values, or duplicates. For example, an age recorded as 250, or as NA, can confuse the AI. A variety of data cleaning techniques are available: missing data can be filled in, and duplicates can be removed. AI can inherit human prejudices if the data is unbalanced. For example, a hiring AI may favor one gender if the past three hires were mostly male. Ensuring diverse and representative data helps promote fairness. Good data is required to train better AI. Data can be messy, and it needs to be processed before training AI. 16:39 Nikita: Thank you, Nick, for sharing your expertise with us. To learn more about AI, go to mylearn.oracle.com and search for the AI for You course. As you complete the course, you'll find skill checks that you can attempt to solidify your learning. Lois: In our next episode, we'll dive deep into fundamental AI concepts and terminologies. Until then, this is Lois Houston… Nikita: And Nikita Abraham signing off! 17:05 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.