How to import a module in Python without using the import keyword

Python has a reasonably straightforward way to work with custom modules by using the built-in import statement. However, there is a caveat when the module shares its name with a Python keyword. So, for example…
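As one illustrative sketch (assuming a hypothetical module file named class.py, which cannot be loaded with a plain import statement because class is a reserved keyword), the standard library's importlib can import such a module by its name as a string:

```python
# Minimal sketch: loading a module whose file name clashes with a Python
# keyword. Assumes a hypothetical class.py sits on the import path.

import importlib

# "import class" would be a SyntaxError, but loading by string name works.
cls_module = importlib.import_module("class")

# The builtin __import__("class") would achieve the same thing.
print(cls_module.__name__)
```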

Wednesday, 27 September 2017

Why Androids Are More Trustworthy Than Humans

Androids have a bad reputation in science fiction. But in science fact, robot-human hybrids can be preferable to either a human or a robot alone. The trick is balancing the flexibility, intuition, and understanding of a human with the efficiency, reliability, and indefatigability of a robot.

With dozens of human agents recording their experiences over hundreds of task instances, the accumulated understanding starts to resemble a rudimentary brain — something capable of memory and learning. The robot brain directs the human arms, which update the robot brain, which then directs the human arms better — and you have yourself a synthetic assistant far more capable than any individual human or robot.

You want someone who remembers everything, learns from their mistakes, and pays attention to every detail.

How would you find these qualities in a human? You might trust a referral, rely on their relevant experience, or simply like the person in question — but they can never deliver on those qualities one hundred percent of the time. Memory is fickle, notes get lost, people get sick, and their judgement is never quite as good as yours. And good luck finding an artificial intelligence that can understand all that!

For a synthetic intelligence (humans plus robots), meeting these criteria is a simple matter of programming.

That’s the problem with human brains — you can’t see the mental programming that determines the person’s actions. You don’t know how they remember things, how they learn from mistakes, or how they pay attention to detail — you just see the results of those thoughts.

Meanwhile, synthetic brains are 100% visible. You can peer into them and rewire them however you like. Make sure input A creates output B. Take action Y when situation X occurs. Under no circumstances should you Z. It’s all right there!

In the rest of this post, I’m going to offer you a peek inside the synthetic brains that currently power Invisible. Here are the dashboards our agents use to solve for all of the above.

Even a human can solve the memory problem — just write down every task you get. At Invisible, we ensure agents record each task as part of the task itself.

Example Instances Dashboard

This dashboard allows clients to confirm at a glance what their assistant is spending time on, which Capabilities (categories) of work it is prioritizing, and how much time it spends on each.

Soon, recording this data will happen automatically via time tracking software, but before we automate anything, we always execute manually to ensure we understand all of the relevant pieces first.

A common problem with virtual assistants is context transfer — how can you trust that Agent A learns from the work of Agent B? Managing and coordinating humans is a job in itself, as any manager can tell you.

Our Context dashboard stores all of the things an agent needs to know about the client. It doesn’t matter if it’s their birthday, the size of a conference room, or how to sort emails from their spouse. It’s the Single Source of Truth for everything we know about the client.

Example Context Dashboard [Client Information Abstracted]

Note the robotic commands, written in natural language. It’s easy to say: “If we book a flight, then we should note the client as Out of Office.”

But it’s harder to know which of their 3 recurring family events per week we should block time for. Or whether that email from their co-founder should be labeled as urgent or just an FYI.

This is where human intuition comes in handy. The agent can use the Context they have, in tandem with the commands the algorithm gave them, and make a decision that matches the circumstances.
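To make that division of labor concrete, here is a purely illustrative Python sketch (not Invisible's actual system; every name in it is hypothetical): the written flight-booking command fires automatically, while a task the written rules cannot decide is handed to a human agent together with the stored Context.

```python
# Illustrative only: one explicit "robotic command" plus a human fallback.
# The context, task, and result structures here are hypothetical.

CONTEXT = {
    "client": "Jane Doe",
    "recurring_family_events_per_week": 3,
}

def handle_task(task):
    """Apply the written rule when it clearly matches; otherwise defer to a human."""
    if task["type"] == "book_flight":
        # Easy case: the written command covers it exactly.
        return {"action": "set_status", "status": "Out of Office"}
    # Hard case: the rules do not decide this, so a human agent uses the
    # stored Context plus their own judgement.
    return {"action": "escalate_to_agent", "context": CONTEXT}

print(handle_task({"type": "book_flight"}))
print(handle_task({"type": "label_email", "sender": "co-founder"}))
```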

As long as we have the right process instructions written down, the agent will make the right decision. And if we don’t, the client will tell us, and we’ll record it as a new Preference.

Preferences come from client Feedback or Mistakes (both of which have their own dashboards). They’re essentially edits to the synthetic brain’s instructions — do this instead of that. They might add a new step, tweak an existing one, or make explicit something that wasn’t written down before.

Because all of these idiosyncrasies are stored in one place, the client can trust that as long as they input their desires into the dashboard, their assistant will act the way they want.

Robots are the only ones who can truly promise they’ll never make a mistake. But that also means they can’t innovate or solve problems that aren’t addressed in the initial delegation. Our synthetic assistants can, which means they can promise the next best thing — no mistake made twice.

We track all of our mistakes in the Mistakes dashboard, and every Mistake gets a Preference to match. That way, we can promise we’ll never make the same mistake twice, and stand by it. Every mistake updates the brain so it’s incapable of making it again.
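As a rough illustration of that pairing (hypothetical data structures, not Invisible's actual data model), each recorded Mistake can carry a link to the Preference created in response, which makes any mistake that has not yet produced a fix easy to spot:

```python
# Hypothetical sketch: every Mistake should point at a corrective Preference.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Preference:
    instruction: str           # the edit to the brain, e.g. "do X instead of Y"

@dataclass
class Mistake:
    description: str
    category: str              # e.g. "human error", "systems failure", "innovation cost"
    fix: Optional[Preference]  # the matching Preference, if one exists yet

def unresolved(mistakes):
    """Return the mistakes that do not yet have a matching Preference."""
    return [m for m in mistakes if m.fix is None]

log = [
    Mistake("Flight booked, status not updated", "human error",
            Preference("After booking a flight, set client status to Out of Office")),
    Mistake("Urgent email labeled as FYI", "human error", None),
]
print([m.description for m in unresolved(log)])
```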

You’ll notice that not all mistakes are equal. Some are due to human error, some to a systems failure, and some are the cost of trying something new. We categorize those as well, and the categories inform our product roadmap and agent training procedures.

You can trust your synthetic assistant when we promise it will never make the same mistake twice. Just check its brain!
