
The court's opinion in this case (the CyberSource case) is at http://www.cafc.uscourts.gov/images/stories/opinions-orders/....

Tl;dr: The three-judge appeals court panel affirmed a summary judgment of invalidity. The panel announced a pretty straightforward test as one "filter" that can rule out patentability for certain claims:

1. If a method can be performed by the human mind alone, or with paper and pencil, then it's an "abstract idea," and therefore unpatentable under Supreme Court precedent.

2. If as a practical matter [a] the method cannot be performed with paper and pencil [b], then it passes the filter described in 1 above [c].

3. If a method would be unpatentable under 1 above, then a claim to a storage device encoding computer instructions for performing the method is likewise unpatentable.

NOTES:

[a] In its discussion, the court distinguished a couple of prior cases where such non-paper-and-pencil methods had been held patentable. See pages 20-21 of the PDF. EDIT to respond to petegrif's comment below: One of the earlier methods that had been ruled patentable was "for rendering a halftone image of a digital image by comparing, pixel by pixel, the digital image against a blue noise mask ...." The CyberSource panel distinguished this precedent by saying that, as a practical matter, that earlier method could not be performed by the human mind or with paper and pencil; see page 21 of the PDF.

[b] An unanswered question: How exactly are ordinary people supposed to determine whether a method can or can't be performed with paper and pencil "as a practical matter"?

[c] Even if a method gets past the "paper and pencil" filter, there are still other filters to pass before it will be patentable. But once the Patent and Trademark Office has issued the patent, a presumption of validity applies.

COMMENT: This is an area where the net effect of the precedents is not entirely clear. It wouldn't surprise me if the court were to take this case en banc, namely to have all of the court's 12 judges rehear and redecide the case in an attempt to clarify the law. (An en banc opinion is generally regarded as carrying more precedential weight than an opinion by a three-judge panel.)



It's very weird to see human capability factor into it. I mean, a person very well COULD compare two images pixel by pixel, it just takes a rather long time. I'm pretty sure I have done things like that on occasion for very small images, in fact.


Absolutely. A method shouldn't be patentable just because it's too boring for a human to perform a mindless computation over and over again. Multiplying a 3x3 matrix is simple for me to do by hand; multiplying a 1000x1000 matrix is no different in process but is much more time consuming; should the latter be patentable? (In my mind the answer is clearly no.)

I can see where there might be some concern that you could claim anything that could be done on a computer could also be done more slowly by a human with a pencil and paper. Frankly, I don't have much of a problem with this: automating simple tasks doesn't seem like an invention; it sounds like a land-grab. Certainly there are nontrivial tasks that a computer can perform but which cannot be done well enough or fast enough by a person, e.g., closing a control loop in software (delay leads to instability; if you can't perform a computation fast enough, you cannot perform the control function, thus, certain such tasks could not be performed by a human with pencil and paper).
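The "delay leads to instability" point can be sketched with a toy simulation (all constants invented for illustration): a proportional controller holding a simple first-order plant at a setpoint settles fine, but the very same loop, with the controller's output delayed by a few extra samples, oscillates and diverges.

```python
# Toy illustration of "delay leads to instability" (all constants invented):
# a proportional controller regulates a simple first-order plant. With no
# added delay the loop settles near the setpoint; delaying the controller's
# output by a few extra samples makes the same loop oscillate and diverge.

def simulate(extra_delay, steps=200, kp=1.5):
    x = 0.0                                # plant state
    setpoint = 1.0
    pending = [0.0] * (extra_delay + 1)    # queue of delayed control outputs
    for _ in range(steps):
        u = kp * (setpoint - x)            # controller computes a correction
        pending.append(u)
        applied = pending.pop(0)           # ...but the plant sees a stale one
        x = x + 0.5 * (applied - 0.2 * x)  # simple first-order plant update
    return abs(x - setpoint)               # final tracking error

print(simulate(0))   # small residual error: the loop converges
print(simulate(5))   # enormous error: the delayed loop has gone unstable
```

The algorithm is identical in both runs; only the latency changes, which is exactly why "could a human do it slowly" misses the point for control tasks.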

It's arguable whether even something like this ("implement an equation... but really fast!") really ought to be patentable, and while I'm a named inventor on at least a few patents in this area, I'm doubtful. Of course, my employer has no such qualms, and is happy to hold said patents.


> closing a control loop in software (delay leads to instability; if you can't perform a computation fast enough, you cannot perform the control function, thus, certain such tasks could not be performed by a human with pencil and paper).

I agree with your point but I find a certain irony in the fact that a pilot is clearly closing a highly delay-sensitive control loop without doing any math whatsoever.


It's not delay sensitive on the same order of magnitude as something like a dc/dc converter. The relevant time constants in the pilot->controls->control surfaces->attitude->pilot loop are measured in fractional seconds (and when they're not, e.g., in fly-by-wire designs, the loops aren't closed by a person).

The fastest relevant time constant for the control loop of a switching power converter with a meager 1 kHz closed-loop bandwidth is measured in fractional milliseconds (you care about poles up to, say, 10x your crossover frequency). For a pretty run-of-the-mill analog circuit like an op-amp, the relevant time constants are often measured in nanoseconds. Heck, even something slow like an audio amplifier has relevant time constants in the tens-of-microseconds range.
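The arithmetic behind those rough figures is just the first-order relation between a pole frequency and its time constant, tau = 1 / (2 * pi * f):

```python
# A pole at frequency f (in Hz) corresponds to a time constant
# tau = 1 / (2 * pi * f). Rough illustrative numbers, as above.

import math

def time_constant(f_hz):
    return 1.0 / (2.0 * math.pi * f_hz)

print(time_constant(1e3))    # ~160 us for a pole at a 1 kHz crossover
print(time_constant(10e3))   # ~16 us for poles out at 10x that crossover
```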

Also, don't underestimate the amount of subconscious math you're doing even for a simple task like walking. Your body's subconscious understanding of physics is pretty damn impressive. The problem is that our brains are pretty specific in the types of problems they're adapted to solving. Riding a bike, sure. Induction motor controller, probably not so much :)


> The problem is that our brains are pretty specific in the types of problems they're adapted to solving. Riding a bike, sure. Induction motor controller, probably not so much :)

I suspect that it would be possible to map most control problems onto a problem that the human mind can solve, either consciously or instinctively. I don't think the speed of the loop should be a deciding factor at all, at least when a computer is doing the control. For example, you could map maintaining power supply stability to standing upright just by slowing the problem down and converting the input variables into axes of motion.

To sum up, bumping up the clock speed on something a human could do (even if it took them a thousand years) seems like a very poor criterion for patentability, and I hope the issue is revisited and clarified.


The point parent was making is you can't slow the induction controller problem down because it would yield a completely different output due to the instability.

Slowing down the sort of pen and paper calculation the court ruled on does not change the result, it simply yields the same result more slowly.

I still agree this is probably not a sufficient criterion for patentability though. There are plenty of open-loop systems that intuitively seem like they should be patentable... for example, video encoding methods.


"The problem is that our brains are pretty specific in the types of problems they're adapted to solving"

The brain can in fact adapt its physics coprocessor to solve many problems it was not originally intended to solve: see extreme sports, arcade games, steering vehicles, and playing live music (and improvising it).

I think we could perform many tasks currently thought to be too computation-intensive for humans if there were some clever interface allowing us to use our coprocessor to solve them.

Like somehow translating the problem of controlling an induction motor into the problem of keeping balance on a bicycle :)


I was going to say the same thing kwantam did about piloting: airplanes generally fly themselves. In the vast majority of situations, all the pilot (ideally) does is add slow and small perturbations. Airplanes without inherent stability, like some fighter jets, are without exception flown by computers, with the pilot providing control inputs to the loop.


I had a professor who was a control systems engineer. They had a helicopter simulation hooked up where you could attempt to fly it both with and without the control system. I don't think I was able to move it more than a few feet without having it flip over when I wasn't using the control system.

And if you want to bring math into it, he was also damn good at solving the matrices filled with hairy PDEs that described the whole system.


> without doing any math whatsoever.

The pilot is doing a lot of calculations. It's just not representing sensory inputs and muscular outputs as numbers.


> A method shouldn't be patentable just because it's too boring for a human to perform a mindless computation over and over again.

That's not the effect here; it doesn't ensure patentability. It's excluding the low-hanging fruit of tasks that can obviously be done by a human.

Maybe we want to exclude the large class of obvious and simple methods from patentability, but simplicity is hard to decide in a general way for any given algorithm. With this test, we can quickly make that decision for a special subset of those obvious and simple operations, the ones that are so obvious and simple that they can be done by hand.

I think it's a tiny incremental change, maybe that deserves criticism: it doesn't go nearly far enough. But hey, the law grinds fine, and exceedingly slow.


This issue is related to a similar issue in copyright – is the "sweat of your brow" enough to make a work protectable? The classic example is the white pages.


> ... multiplying a 1000x1000 matrix is no different in process but is much more time consuming; should the latter be patentable?

A naive algorithm for multiplying square matrices runs in O(N^4). A slightly less naive algorithm runs in O(N^3). There is an especially clever algorithm that runs in O(N^2.807).

If somebody discovers a practical algorithm that runs in O(N^2.1), should it be patentable? I say yes. No amount of grinding away with pencil and paper is equivalent to a clever divide and conquer algorithm.
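For the curious, the O(N^2.807) algorithm alluded to above is Strassen's. A minimal sketch for square matrices whose size is a power of two shows the trick: seven recursive products replace the eight block products of the naive method, giving the N^log2(7) ≈ N^2.807 exponent.

```python
# Strassen's divide-and-conquer matrix multiplication, sketched for square
# matrices whose size is a power of two. Seven recursive products (M1..M7)
# replace the eight block products of the naive method.

def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def strassen(A, B):
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    # Split both matrices into quadrants.
    A11 = [r[:h] for r in A[:h]]; A12 = [r[h:] for r in A[:h]]
    A21 = [r[:h] for r in A[h:]]; A22 = [r[h:] for r in A[h:]]
    B11 = [r[:h] for r in B[:h]]; B12 = [r[h:] for r in B[:h]]
    B21 = [r[:h] for r in B[h:]]; B22 = [r[h:] for r in B[h:]]
    # Strassen's seven products.
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    # Reassemble the quadrants of the product.
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bottom = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bottom

print(strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

Whether a person *could* grind through those seven products by hand for a small matrix is exactly the kind of question the court's filter raises.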


I'm not a lawyer, but I'm pretty sure Gottschalk v. Benson makes it clear that the algorithm itself is not patentable.

http://en.wikipedia.org/wiki/Gottschalk_v._Benson

EDIT: It occurs to me upon reflecting on my reply that you're making a statement unqualified by present legal precedent. In that case, I respectfully disagree. Patenting fundamental mathematical truths seems like a very bad idea to me, and I'm glad that nominally such things are not allowed (though of course practically it goes on all the time).

Perhaps another way to say this: the problem most people have with software patents is that it's more or less patenting little pieces of math. I think it's more or less well known how most "practitioners of the art" feel about this.


Sorry. By "patentable" I meant when embodied by a clearly-defined machine.

And anyway, why not patent algorithms in the abstract? There are an infinity of "fundamental mathematical truths", but only a few are spectacularly useful. If a given algorithm truly is obvious and trivial, then you can evade the patent by spending 30 seconds to find another obvious and trivial algorithm. (I note that most of the people complaining about the trivial obviousness of all software patents are not cranking out 50 algorithms a day like RSA or the fast Fourier transform. When they talk about software being obvious, they're talking about other people's software.)


> And anyway, why not patent algorithms in the abstract?

Because the costs outweigh the benefits, unless you're a patent lawyer.

Software patents already create huge legal risks to businesses that did nothing wrong except come up with a simple idea that someone patented first, they're ridiculously expensive to enforce, and most of the money ends up driving more lawsuits rather than more research.


So Carl Friedrich Gauss could have patented his FFT algorithm, since it would take 160 years for the next guys to come up with it on their own?


It's possible for an algorithm to be obvious and trivial, and also the only way to do it.


> No amount of grinding away with pencil and paper is equivalent to a clever divide and conquer algorithm.

Unless that grinding is to reproduce the algorithm's steps. Which is a rather important issue in this case (that and people object to the idea of having patent law restrict use of an organized thought process).


A bit off-topic -- what O(n^4) naive algorithm did you have in mind? I cannot really think of any -- just implementing the definition gives you O(n^3).


Oops. Thanks for the correction.


Agreed. Even in the case at hand, it seems like the patent holder could argue that a human couldn't perform this task at Web scale.


Yeah, this is patently (ha) absurd. I can't think of an algorithm off the top of my head that isn't feasible to do manually for small values of n.


The question is, will doing them for small values of n solve the problem? PageRank may be more or less useless with a small data set.


It would provide the correct result of the algorithm, which should be enough. Usefulness is outside the scope of patents (AIUI, IANAL).


Actually, utility is one of the core requirements for a patent in the U.S. Whether or not PageRank on a tiny data set is still "useful" would be a question for the patent office, but if it were deemed not useful, that would be grounds for denial.


I think the whole point of patents is that they are a registered solution to a problem. So I think usefulness is within their scope. But IANAL either.



The problem I have with this is the following. a) There is no software that we cannot put into hardware - literally! Software and hardware are interchangeable, so the oft-cited distinction between s/w and h/w is, to say the least, somewhat arbitrary. b) "If a method can be performed by the human mind alone, or with paper and pencil..." This test sounds as if it might make sense, but it runs into real problems once you bear in mind issues of computability. E.g., is a method patentable if, although it could in principle be performed by the human mind with paper and pencil, the degree of computation required is so enormous that doing so is completely impractical? E.g., real-time graphics processing. I suspect that the class of such methods is extremely large.


I don't see the problem with A. If you take the software, which can't be patented, and put it into hardware, you can patent the hardware. This seems perfectly reasonable to me, and it holds pretty closely to the original idea of patents: a specialized device that performs an action is an invention, but the act being performed is not an invention. I also don't think that software that emulates the hardware is in violation of the hardware patent.

I think B is a much bigger problem. Any computer program could be performed with pencil and paper, by getting a big file folder to simulate RAM, indexing it by numbers, and performing assembly instructions by hand with post-it notes as registers. I think the major exceptions are interfacing with other components, including UI systems and networking, and anything with a real-time component.
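The parent's pencil-and-paper computer can be made concrete with a toy sketch (the three-instruction set here is invented for illustration): a dict plays the indexed file folder of RAM, a couple of named registers play the post-it notes, and every instruction is a step a person could carry out by hand.

```python
# Toy "computer" whose every step a person could do by hand: memory is a
# dict (the indexed file folder), regs are the post-it notes, and the
# invented instruction set has just load / add / store.

def run(program, memory):
    regs = {"r0": 0, "r1": 0}     # the post-it notes
    for op, *args in program:
        if op == "load":          # copy a folder entry onto a post-it
            regs[args[0]] = memory[args[1]]
        elif op == "add":         # pencil arithmetic on two post-its
            regs[args[0]] += regs[args[1]]
        elif op == "store":       # file the post-it back into the folder
            memory[args[1]] = regs[args[0]]
    return memory

mem = run([("load", "r0", 0), ("load", "r1", 1),
           ("add", "r0", "r1"), ("store", "r0", 2)],
          {0: 2, 1: 3, 2: 0})
print(mem[2])  # 5: the sum, computed one hand-executable step at a time
```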


> Any computer program could be performed with pencil and paper, by getting a big file folder to simulate RAM, indexing it by numbers, and performing assembly instructions by hand with post-it notes as registers

Indeed. But surely if the purpose of the algorithm is something like "find whether this combination exists in a set of millions / billions of records within a tractable time" then it cannot possibly be done by hand in a tractable/practical way -- and it will fail this test?


It's still just an algo. It's still just math.


Yes, but when time-sensitive information relies on speedy calculation, there is inherent usefulness to the invention. A competing firm may choose to solve the issue on paper in order to avoid lawsuits for infringing the patented invention, but it's impractical.


So an otherwise fundamentally unpatentable algorithm could become patentable "when used in a time-sensitive application"?


Or, an otherwise fundamentally unpatentable algorithm with today's core counts could become patentable five to ten years from now, when pocket computing power is measured in teraflops?


> Any computer program could be performed with pencil and paper

This ruling requires that doing it by hand or purely mentally be achievable as a practical matter.

It is not practical for me to draw frame by frame the contents of an AVI file by hand, even if it were theoretically possible, etc. While this ruling does leave judgment calls and grey areas, there are also a lot of cases where it can be decided very directly whether or not it is practical to do the operation by hand.


Not having looked at the avi container format why would you say it's impractical? Certainly a 'normal' sized avi would be impractical but what about a 10x10 grid of pixels? The algorithm is unchanged, why should the size of N matter?
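The point in miniature (with a made-up pixel routine, not the actual AVI format): this invert-every-pixel transformation is the identical sequence of hand-executable steps whether the grid is 2x2 or 10000x10000; only the repetition count changes with N, not the algorithm.

```python
# Size-independence of an algorithm: inverting every pixel is the same
# process at any image size; only the number of repetitions grows with N.
# (A made-up example; nothing here is specific to any real container format.)

def invert(image, depth=255):
    return [[depth - px for px in row] for row in image]

print(invert([[0, 255], [128, 64]]))  # [[255, 0], [127, 191]]
```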


It seems to me like basing patentability on time constraints actually imposes a distinction between hardware and software, regardless of how you feel about their logical interchangeability. For example, an algorithm for real time graphics processing would be just as impractical to run on ENIAC as it would be to do with pencil and paper, but there's no reason you couldn't port the software. So if the validity of the patent is based on its usefulness in a real-time application, it would have to be something like "running algorithm x using computer system y to achieve the desired result in z length of time." Including both the software and hardware used would be the only way to ensure that the time constraint was actually met. I think it's obvious that very few (if any) software patents could meet that criterion.


IANAL, but would it not stand to reason that if the algorithmic method exists specifically to address a computability constraint, then by implication the "human mind / pen & paper" criterion could only pass if performable under the same constraint?

For example, if the purpose of the method specifically relates to real-time graphics processing (i.e., the real-time constraint is a fundamental aspect of the method), then the expectation would be that the pen & paper test would have to achieve the same constraint (i.e., be calculable in real time) in order to invalidate the claim of the method?


I guess it depends on the granularity of the algorithm?

Imagine a transformation mapping one set of points onto another. If each individual transformation can be calculated with pen&paper, I'd hope the process is not patentable, even if running it a million times in a loop is what makes it useful.


> There is no software that we cannot put into hardware - literally!

Indeed, as I have often argued too. And even "virtual" software can readily be implemented in human scale electromechanical devices, as with the relay-based computers where you can watch the memory bits change and even reach in and flip them with your fingers.

"If a method can be performed by the human mind alone, or with paper and pencil..."

However the issue with this patent is that it was too broadly written. The claims apply to all possible implementations, including pencil and paper methods, not just implementation with a logic machine. That is unpatentable.


I think it is a positive step forward and may be useful for the folks who are litigating against some of the more egregious software patents.

That said, I don't believe '1-click ordering' would survive this test, and if not that would be a good thing in my opinion.


If this is upheld, I see all software patents being eventually voided. "as a practical matter" can be attacked by using induction. Do an image transformation on a 10x10 image. Do it again on 11x11. Argue that the patent is invalid by induction. Perhaps someone actually has to spend a week the first time on a large image to "prove" induction.

Real time doesn't exist; it just means accomplishing a task within a specified interval. If you can do a task in 1 hour and someone else can do it in 30 minutes, it seems to me that it is then possible to argue that doing the same thing in a microsecond on a computer is not patentable.

Perhaps the judges will need a little focused high school math and a lesson on Turing machines.


Is there any precedent for courts recognizing mathematical or logical proofs? I was under the impression that courts, by their design, rely entirely on appeals to authority in the form of expert testimony.

For that matter, would it be possible to design a court system based on a fully modern understanding of logic, proof, induction, etc., rather than the seemingly ad hoc system we have now?


For better or worse, courts are pretty immune to this particular form of induction. That's why you can copyright the sequential bits of a movie, but not the number "0110".



