Greg Lambert 2:49 I hear not all of that was HyperDraft. And, you know, are you seeing that there are some that have definite, concrete use cases? But the most important thing, which I think is hard for many organizations to hear, is that you must put in the work to get something out of it, right? So that’s good.
AI is becoming proficient at assisting with various legal services such as contract management, legal prediction, eDiscovery, and, sooner or later, courtroom hearings. AI applications are reaching the roots of legal tech and redefining the legal profession. And it does not stop there.
In today’s episode, we’ll be diving into the fascinating world of one of the most advanced machine learning tools out there: ChatGPT. I’m your host, Eric Ahern. Professor Hoofnagle 03:03 ChatGPT is the newest iteration of a machine learning technology that can generate text.
The panelists included Danielle Benecke, founder and Global Head of Machine Learning at Baker McKenzie, so large law firms are hiring people to lead machine learning within our law firms. There was definitely a lot of interest in it. We’d love to hear from you.
Marlene Gebauer 9:21 So Jordan, or Oren, or both of you, I’m very interested in hearing the answer to this question, because we’ve heard so much about these certifications in the past. I definitely think data. I think it’s going to be in cyber, privacy, machine learning, everything.
This protects the researcher from the AI “creating” an answer from all the non-relevant information it has collected in its machine learning large language model. They do AI and machine learning proofs of concept for governments, large companies, and so on. But I think we’re definitely… and you were right.
Paulina Grnarova 22:41 Yeah, I was going to say, we’ve definitely seen some great examples of people playing around with the models, which helps them understand the models’ limitations and the things you can do with them. Is it in AI or machine learning, or both? Very cool to see.
And as we see improvements in machine learning algorithms, the cost of predicting legal outcomes is going to essentially vanish; it’s going to become very clear what would happen in court with respect to a particular situation in terms of the legal outcome. Greg Lambert 54:23 Yeah, we’ll definitely link that out.
The Proposed Rules appear to narrow this definition in two ways. The Proposed Rules are now subject to a comment period ending on the day of DCWP’s public hearing, October 24, 2022, and DCWP will then publish the final rules at some point thereafter, hopefully before the AEDT Law comes into effect.
We’d love to hear your thoughts on what value you see in ChatGPT and GPT-3.5. They’re just these massive machines that folks can’t really wrangle, and there are entire new startups built around machine learning transparency, trying to give humans a way to view the models and get a better understanding of them.
And obviously, now we’re looking to expand the team more and more; I think we’ve looked into hiring MLOps people, machine learning engineers, and software engineers, and it has already produced a tremendous amount of value for the firm. So, you know, if you’re a legal geek like I am, this is definitely a must-read.
The joint UK and U.S. guidance applies to all types of artificial intelligence and machine learning and is divided into four key topics: (i) secure design; (ii) secure development; (iii) secure deployment; and (iv) secure operation and maintenance.
To recap, the DCWP released an initial set of proposed rules on September 23, 2022, and held a public hearing on November 4, 2022. Due to the high volume of comments expressing concern over the Law’s lack of clarity, the DCWP issued a revised set of proposed rules on December 23, 2022, and held a second public hearing on January 23, 2023.
The term “automated employment decision tools” is broadly defined as any “computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence” that “issues a simplified output.” The measure underwent Council review, with a public hearing scheduled for September 2022.
Next, we plan to expand the product’s scope to cover more aspects of the litigation process, to improve the machine learning summarization model, and to develop visualizations of evidence based on the data present in the chronology. Finally, we plan to build integrations with e-discovery and practice management products.
We didn’t have all the books we needed, and there was no definitive list of “all the books containing official court decisions.” So we did research and made one. When the deal closed, we were ready to go. If you talk to lots of legal tech startups, like I do, you’ll hear how much easier it is to start something new because of the project.
Because as we go into the world of AI, machines learn like humans do; they’re designed to bring in experiences and knowledge from their creators and from the underlying data sets. And at the time, that role was about helping the machines learn how to talk to humans.
AI-assisted discrimination: “Machine learning is like money laundering for bias.” – Maciej Cegłowski [7] Employers can use AI to assist with a host of tasks. For instance, Virginia’s law distinguishes “cannabis oil” from other types of medicinal marijuana and has specific definitions of what is and is not protected.
So I did a bit of cleanup on them as best I could; some of the voices, in order to remove enough of the background noise to hear them, may sound a little off to the listeners. Like, for example, my company deals with… definitely, there are very few publicly trained models. Greg Lambert 21:42 Yeah, that one was a little harder to hear.
There are many different ways that one can approach this problem, both different technical techniques and machine learning techniques that one can use. Because I can hear my IT security guy screaming right now. Greg Lambert 27:00 Well, that’s interesting, because I feel like… definitely welcome.
And the “deep” comes from deep learning, which is a form of machine learning. And you can’t really get into their psychology too much here around deepfakes, but then there are others. We might see things, or we might hear things, that aren’t what they seem to be. We’d love to hear from you. Isha Marathe 2:31 Yep.