Did Craymen agree with the role of the Mediator?

In Panzer Dragoon Saga, Craymen asks for Edge’s help. He says that Edge is “our only hope”. He says that he merely set the stage. He tells Edge: “You must fulfil your own destiny, in your own way.”

This sounds suspiciously like something that was built into the role of the mediator. From the guidebook translation:

I think it’s likely that Craymen knew about how the Destruction Faction programmed the dragon, and knew about the role of the mediator.

A further question is, was Craymen promoting the role of the mediator? It certainly seems that he was leaving Edge to make up his own mind at the end, rather than attempting to force Edge into the Preservation Faction’s role. Perhaps this was simply because Craymen, with the Empire surrounding the remainder of his fleet, had given up on controlling the Towers himself, but hoped that Edge would take up that role instead (perhaps under his guidance). However, it seems that Craymen could have easily overpowered Edge in the Tower (since the dragon was located outside the room), awakened Azel, and asked her to control the Tower. Azel probably would have obeyed Craymen at that point, since she believed he was trying to save the world. So why didn’t Craymen assume control? Could it be that he ultimately agreed with the Destruction Faction that a youth should be the one to decide humanity’s fate? Or perhaps he saw some quality in Edge specifically that made him believe Edge should be the one to decide?

I think it was only the dragon that really knew. I even wonder if he would have continued his mission without a rider. It seems that the rebels wanted humans to decide their own fate no matter what. When people only do what is most efficient, humans don’t remain useful for long. We’d just keep evolving until we became cyborgs or something. The dragon ensured that people remained in charge of their own technology and destiny.

I think Craymen just knew he had been defeated. It was up to Edge to decide his own fate from that moment onwards and to stop the Empire, and later destroy the Tower (which isn’t what Craymen originally wanted).

Craymen must have seen the darkest side of human nature to believe that the Tower was the only solution to the world’s problems. But I imagine that living in a highly efficient empire would put things into perspective very quickly. You actually have to give him a lot of credit for managing to almost destroy the Empire with such a small force.

I think the Seekers knew about the mediators as well, considering that the guidebook translations appear to be written from the perspective of the Seekers. Perhaps Lagi revealed this information to Lundi and that is how the Seekers became aware of it.

I think the Preservation Faction’s ideology wasn’t so much about efficiency as it was about stopping humans from destroying themselves. The efficiency of the Towers was only a means to that end, rather than an end in itself.

The technology really took over though. It would have been interesting to see what the Preservation Faction would have been like if they had survived. I imagine they would have had a very unique point of view. Just imagine people smarter than the technology. It’s a scary thought.

It makes me wonder if the majority of people actually believed that being controlled by technology was for the best, but given how lazy people tend to be in the real world, I wouldn’t be surprised.

I guess the Seekers knew to some extent. The fact that dragons were made dependent on a rider to raise their awareness shows that the makers wanted safeguards, but I wonder how much of the Heresy Dragon’s mission was motivated by pure ideology or programming.

True. I was just pointing out that efficiency and technological control are conceptually distinct. You could have an efficient system where people were in control as well, such as the Imperial fleet. Arguably, the Preservation Faction were against that kind of efficiency, the efficiency of war and resource depletion.

Indeed. I imagine their strategies would mirror that of Craymen - ruthless, and seeing individual human persons as less important than the biological human species.

I wonder if, from a conservation perspective, they viewed humans as just another pest to the ecosystem, like how possums are viewed here in New Zealand. Technically, they wouldn’t be wrong.

They’d need a critical mass of followers to become so powerful in the first place. Perhaps by the time people realised that they had centralised too much power, it was too late for them to take that power away from the rulers of the Preservation Faction. They had dragons, after all.

Probably to the extent that the dragon served the will of the mediator. The program was designed to destroy the Towers; that was its mission, but only if the mediator agreed with that choice.

A sequel to Panzer Dragoon Saga where the player can make choices (preserve or destroy) would be an interesting addition to the series. You don’t just follow the path of the Destruction Faction; you choose if that’s the kind of mediator you want to be.

Some would argue that it’s more efficient for a high tech system to live your life for you, and it would be if it was benevolent and if people were too lazy to take charge of their own lives.

It’s just self-destructive, but people are too shortsighted to see that (from my experience).

If there was a system that already knew what was harmful and what wasn’t, and forced you to live by those rules but at the same time gave you everything that you could possibly want or need, people would vote to be slaves. They just wouldn’t see it that way. Perhaps that is because they don’t know any different, but since we are reactionary creatures, laziness can be used as a weapon against us.

It would just be more efficient to be a slave. I’ve debated with people who think that they shouldn’t fight real tyrants because they’d just end up being murdered. They are just being efficient (cowards). That’s off topic but you get the picture. People will only do what’s most efficient for them almost without exception.

Why would people want to be controlled by technology? To save the planet? In the case of the Towers it seems to go to an extreme. I don’t see people voting for that, but if they have lost control of their leaders and technology then they have already surrendered their rights.

[quote=“Solo Wing”]Indeed. I imagine their strategies would mirror that of Craymen - ruthless, and seeing individual human persons as less important than the biological human species.

I wonder if, from a conservation perspective, they viewed humans as just another pest to the ecosystem, like how possums are viewed here in New Zealand. Technically, they wouldn’t be wrong.[/quote]

I think that they would think holistically and see everything in terms of the whole. They’d see everyone as mere replaceable parts in a larger machine. The leaders would see themselves as irreplaceable, of course.

That’s just my own interpretation, but real empathy seems to be absent from the Ancients’ worldview.

It would depend on their culture. A lot of people believe that the majority should control the minority. The anti-technology faction seems to be the losing side, and understandably so.

It’s a nice contrast of extremes that makes you see the middle ground.

[quote=“Solo Wing”]Probably to the extent that the dragon served the will of the mediator. The program was designed to destroy the Towers; that was its mission, but only if the mediator agreed with that choice.

A sequel to Panzer Dragoon Saga where the player can make choices (preserve or destroy) would be an interesting addition to the series. You don’t just follow the path of the Destruction Faction; you choose if that’s the kind of mediator you want to be.[/quote]

I’d like to think that the dragon made his own choice too. But that doesn’t seem to be the case. The rebels may have been morally obligated to give the dragon program free will too. They probably couldn’t take that risk though.

[quote=“Geoffrey Duke”]Some would argue that it’s more efficient for a high tech system to live your life for you, and it would be if it was benevolent and if people were too lazy to take charge of their own lives.

It’s just self-destructive, but people are too shortsighted to see that (from my experience).[/quote]

Do you mean a system that takes away the need to work, or a system that takes away the need to think? As I see it, those are quite distinct systems. For example, a computer can take away the need for me to work as hard in writing this post, compared to the pre-Internet era when I would have had to manually write and post my writing via snail mail to others. Compare that to, say, a system that just tells me what to write. That might not be a technological system, either; it could come in the form of an ideology that’s controlling what I say and do.

[quote=“Geoffrey Duke”]It would just be more efficient to be a slave. I’ve debated with people who think that they shouldn’t fight real tyrants because they’d just end up being murdered. They are just being efficient (cowards). That’s off topic but you get the picture. People will only do what’s most efficient for them almost without exception.

Why would people want to be controlled by technology? To save the planet? In the case of the Towers it seems to go to an extreme. I don’t see people voting for that, but if they have lost control of their leaders and technology then they have already surrendered their rights.[/quote]

I think it would be like in Star Wars, where the people give up control to the Emperor in exchange for security.

[quote=“Geoffrey Duke”]I think that they would think holistically and see everything in terms of the whole. They’d see everyone as mere replaceable parts in a larger machine. The leaders would see themselves as irreplaceable, of course.

That’s just my own interpretation, but real empathy seems to be absent from the Ancients’ worldview.[/quote]

Holistic thinking can be quite dangerous if individual parts are ignored.

Panzer Dragoon Orta seems to suggest that the Preservation Faction’s leaders (Abadd’s masters) planned to return, so you’re probably right that they saw themselves as irreplaceable. Their treatment of other humans as the ‘replaceable parts in a larger machine’, as you put it, meant that their circle of concern was quite narrow, extending to only a small group of humans. Everyone else would likely be classed as ‘others’, not included in the circle of concern, other than for instrumental reasons. Similar to how nationalists, by their very definition, don’t class those not belonging to the nation as part of their circle of concern. Abadd’s masters would have had some limited empathy, extending to the few individuals whom they decided to protect.

I think the dragon program merely carried out his programming (unlike Azel, who went beyond it), whereas the physical dragon appeared to make his own choices. Panzer Dragoon Orta implies this when the dragon chose to ‘remain’ to protect Orta. The dragon program didn’t appear to understand why his other half would do this; to his programming, existing beyond his specified task was illogical.

Would a mere program talk about “friends” though? It even said it understands its other half’s choice. I think it had free will, at least to some extent. Keep in mind that Azel ultimately completed the task assigned to her as well (after her reprogramming), even though it was her own decision.

I think that human beings can be so lazy that they will only do what is necessary, so if they didn’t need to think or work then they wouldn’t. Generally. There would be a few exceptions like we see in Phantasy Star 2.

So IMO, even if the Ancients were highly intelligent, they might not truly use that intelligence, apart from their leaders. It would depend on how the power structure was set up. The leaders might not want competition and therefore seduce people with paradise or brainwash them into accepting a collectivist state.

Possibly. I think that the majority probably just lost control of their leaders because they never had control in the first place.

[quote=“Solo Wing”]Holistic thinking can be quite dangerous if individual parts are ignored.

Panzer Dragoon Orta seems to suggest that the Preservation Faction’s leaders (Abadd’s masters) planned to return, so you’re probably right that they saw themselves as irreplaceable. Their treatment of other humans as the ‘replaceable parts in a larger machine’, as you put it, meant that their circle of concern was quite narrow, extending to only a small group of humans. Everyone else would likely be classed as ‘others’, not included in the circle of concern, other than for instrumental reasons. Similar to how nationalists, by their very definition, don’t class those not belonging to the nation as part of their circle of concern. Abadd’s masters would have had some limited empathy, extending to the few individuals whom they decided to protect.[/quote]

They probably thought that they could just grow new people in a lab or something. They’d make one hell of an enemy if there ever was a new Panzer game, but that would rewrite history too much for my liking.

I think the leaders of the Ancient civilisation were super-geniuses. They had the technology that could make life, so I have no doubt that they could genetically enhance themselves. It’s not like now, where we have many specialists working together who might create life, perhaps even by accident. But it won’t be long before we can create “perfect” human beings.

IMO, I’d like to believe that the rebels gave the dragon program free will, but they probably didn’t have the time or the opportunity, otherwise they’d be faced with a real moral dilemma: give the dragon program free will and risk it not agreeing with your goals, or save humanity.

It’s possible though. It definitely seemed sentient to me. Maybe it could have been recreated to see from the rebels’ point of view.

It makes me wonder whether the dragon would have continued its mission without a rider, on programming or ideological grounds alone.

There are some issues with the concept of free will. But if we’re simply asking if the Heresy Program was autonomous, I think he would have been, within the context of his programming. The Heresy Program would have been designed to make new, unexpected plans, such as choosing Kyle as a new rider when the Sky Rider died. Being born as a mutant coolia possibly wasn’t planned by the Destruction Faction either. It seems that the Heresy Program was flexible and could adapt to new situations as they arose.

The other question is whether the Heresy Program had emotions - I’m on the fence with this one. At the end of Panzer Dragoon Saga, the Heresy Program talks about his duty, how his mission is complete, and how it’s okay for the Divine Visitor to destroy him. He doesn’t seem emotional at all going to his death. Agreeing for the physical dragon to remain wasn’t contrary to his programming, but he likely would have been aware that humans and other sentient beings could have emotions - so perhaps that’s why he understood the physical dragon’s decision? The use of the word “friends” does seem to imply some sort of emotional connection, although perhaps this was just the Heresy Program’s way of talking about allies?

Depending on how we define work, this might be true. Would people submit to the drudgery of most jobs if they didn’t have to? Probably not many. On the other hand, no required work would free up time for more creative projects. So, I’m doubtful that people would just sit around drinking beer all day, every day, if there was no required work. You’d probably find a greater number of projects that people would want to do. Look at open source development, Wikipedia, etc, and imagine what could be produced if people had more spare time to contribute. That mentality could be applied to the whole of society, rather than just to virtual projects.

[quote=“Geoffrey Duke”]IMO, I’d like to believe that the rebels gave the dragon program free will, but they probably didn’t have the time or the opportunity, otherwise they’d be faced with a real moral dilemma: give the dragon program free will and risk it not agreeing with your goals, or save humanity.

It’s possible though. It definitely seemed sentient to me. Maybe it could have been recreated to see from the rebels’ point of view.[/quote]

Hmm. Free will wouldn’t be required for the Heresy Program to be sentient, if indeed he was sentient. Non-human animals are generally considered to be unfree, yet we consider them sentient (they have the ability to feel), for example.

I think the mediator was there to make the hard decisions. The Heresy Program seemed to have a more functional role. Would there be any need for the Destruction Faction to make the Heresy Program sentient?

I think continuing its mission without a rider would be impossible, based on the role of the mediator. Azel actually tells Edge that it doesn’t work that way when he tells her to take the dragon and find the Seekers if something should happen to him. The dragon loses its purpose without a rider, she says. Azel may have been referring to her connection with Atolm though (whose purpose appears to have been to protect Azel). In the case of the Heresy Dragon, I think he would find another human rider if Edge died.

Maybe you are right, but IMO, necessity is the mother of invention. People would slow down to a crawl. They wouldn’t need to understand the technology they used either, and that can be dangerous, as they later learned. Of course, it was probably just forced on them, but I think that the people of the Ancient Age lost control of their leaders. I doubt they willingly committed suicide for the sake of the planet.

But of course, you never know. :slight_smile:

[quote=“Solo Wing”]Hmm. Free will wouldn’t be required for the Heresy Program to be sentient, if indeed he was sentient. Non-human animals are generally considered to be unfree, yet we consider them sentient (they have the ability to feel), for example.

I think the mediator was there to make the hard decisions. The Heresy Program seemed to have a more functional role. Would there be any need for the Destruction Faction to make the Heresy Program sentient?[/quote]

They might have a moral obligation to give the dragon the potential to have free will, but they’d have to ensure that it was on their side somehow.

IMO, the dragon might have been perfectly capable of completing its mission on its own. It was just programmed and/or ideologically motivated to find a human. The dragon could have even found a drone instead.

The leaders of the Ancient Age would not have been held back by needing humans, and therefore they’d be more efficient. Unless the humans were superhumans of course, but I think they’d still be less efficient than a drone or other bio-weapon.

I personally believe that most people have a way to contribute to a high tech society, but few want to spend their time doing so.

Sometimes that’s the case, but if you look at many human inventions they weren’t created out of necessity. We didn’t need the car or the personal computer. People created them because they wanted to. Whereas many of the creations that are made out of necessity (or for merely money-related reasons) aren’t all that creative - they just do enough to get the job done, but no more. Bigger, faster versions of what we already have, but not usually creative inventions. The games industry is a good example. Most of the innovation is happening in the indie game space these days. It’s often quite risky to make an indie game. To survive financially, it’s certainly not necessary, often the opposite. The safer route would be to work at a big publisher that doesn’t innovate but offers greater financial security.

That’s definitely a concern. If the Ancient society promoted passive consumption of its technology rather than learning how it worked, I can imagine the power of knowledge becoming centralised in the hands of a few. Perhaps they used a non-questioning religion, such as that of Zoah, to control the masses.

We don’t usually consider it an obligation to give computer programs free will. If the Heresy Program didn’t have a desire to be free, perhaps there would be no need to give it the ability to step outside its programming?

[quote=“Geoffrey Duke”]IMO, the dragon might have been perfectly capable of completing its mission on its own. It was just programmed and/or ideologically motivated to find a human. The dragon could have even found a drone instead.

The leaders of the Ancient Age would not have been held back by needing humans, and therefore they’d be more efficient. Unless the humans were superhumans of course, but I think they’d still be less efficient than a drone or other bio-weapon.[/quote]

The translations suggest otherwise though. Take this passage for example:

This strongly implies that the Heresy Dragon would not carry out his mission without a human rider. The Preservation Faction’s dragons would be another matter though since they had drone riders. Potentially the Dark Dragon could carry on without its rider, although the Preservation Faction may have added the drone rider as a failsafe. The Destruction Faction’s dragon seemed to be based on very similar tech, suggesting that they swapped out the role of the drone with a human.

New inventions that worked became popular because someone wanted to make money and they made life easier for a lot of people. I am sure that indie devs would like to make a lot of money from their games, and if they needed the money they’d be motivated much more.

Sure, people have to want to do things, but they can also need to do the things that they want to do. It’s a healthy kind of pressure. But in the case of real life at the moment, there is no choice. When you give people a choice, they’d have to find reasons other than necessity. If you take the welfare versus jobs debate, for example, it’s easy to see that a life of dependency is not good for people. It’s better to encourage people to aim higher. If we didn’t live in the western world, there would be no welfare like we have here. But there would be fewer opportunities as well.

People might do nothing with their lives simply because it’s easier and they don’t have to do anything. I think not wanting to do anything is a bigger problem though. People have to want to change. I assume that the Ancients would be different, but they still wouldn’t need to learn how to use their technology unless it was necessary. We’ve lost that motivation. Most of us anyway. Even the current human mind of ordinary people is capable of learning much more if we put our minds to it. But it’s easier, and therefore more efficient to be lazy and let others worry about that for us.

So if we build a utopia, let’s make sure that everyone makes their leaders a reflection of them and not vice versa. It would require the people not to be braindead zombies though. :slight_smile:

You could regard human beings as nothing more than biological machines as well.

I think this is an exception to the rule. If it’s capable of free will then the dragon program would have individual rights, and there’s no point surviving if our morality doesn’t survive with us. But it would be an ethical dilemma.

But IMO, the dragon could have carried out its mission without a rider. It just needed a human to ensure that humans were in charge of their own destiny. The rebels could have just given it a drone instead.

Unless the human rebels were as smart as or smarter than drones, which is possible. The current human population could have just forgotten their own potential (which is more than possible).

This talk of popularity is a distraction. We weren’t talking about popularity, we were talking about the motivation behind new inventions. Of course many inventions are popular because someone wanted to make money. That something sells well or is well received says nothing about what the motivation behind the original invention was though. It says nothing about the personal story behind the invention. It just means that someone has figured out how to package the invention in a way that makes it popular.

I don’t doubt that indie devs would like to make lots of money. They depend on money to survive and buy things they want. We all do. But this doesn’t explain why they would take the hard route and make their own game instead of taking the easier route and working for EA. If you watch Indie Game: The Movie it confirms that indie devs make huge sacrifices in their lives for something that may not even bring in enough money to survive on.

If the desire for money was the key variable, then the rational choice would be to work for EA. The key variable here is not money though. It is the desire to invent. The desire to make money for these indie devs is secondary once the minimum resources required to survive are met.

If you mean psychological needs, then I would agree. Inventors may have the psychological need to make their mark or whatever. These psychological needs are a type of desire. If that desire is strong enough, it’s the only desire they need, so long as they have the resources to survive.

However, when it comes to just the need to survive, I don’t see any evidence that that type of pressure creates or promotes a desire to invent. It just creates a desire to get a job - any old job - in order to survive. The most rational thing to do would be to work somewhere that makes a lot of money, such as a bank, rather than doing something creative.


Earlier I wrote there was a distinction between (a) a system that takes away the need to work and (b) a system that takes away the need to think.

I hope I’ve communicated clearly why having system (a) would not stifle innovation - quite the contrary. With plentiful resources, more time to think and create the inventions, and without the pressures to do a mundane, non-creative job in order to survive, there would be more time and energy available to work on creative projects. By more energy, I mean mental or physical energy that wouldn’t be sapped by working a 40-hour week.

If (a) happened, I don’t think (b) would necessarily follow. With no work, I think people would get bored of not thinking, and so seek out tasks that intellectually stimulated them. This wouldn’t mean that everyone would need to learn how all the technology worked. You’d have people splitting off into their different interest groups. Those who wanted to learn how the technology worked could do so, but it wouldn’t be for everyone. So long as there was a critical mass of people who understood it, and the power was distributed rather than centralised (like the Internet), that would be enough.

In any case, if people decided not to think it wouldn’t have anything to do with being out of work, since most jobs divert people’s attention away from thinking about what really matters.

[quote=“Geoffrey Duke”]You could regard human beings as nothing more than biological machines as well.

I think this is an exception to the rule. If it’s capable of free will then the dragon program would have individual rights, and there’s no point surviving if our morality doesn’t survive with us. But it would be an ethical dilemma.[/quote]

I’m confused. Weren’t we talking about if the Destruction Faction ought to have given the Heresy Program free will, not whether it is the case that it had free will? Perhaps you mean if it’s sentient, then it would have rights?

It isn’t clear what you mean here either. You say that it needed a human and could have carried out its mission without a rider. But that’s a contradiction. It can’t need and not need a human rider to complete its mission at the same time.

To answer the basic question, I don’t think Craymen necessarily “agreed”… but as Geoffrey touched on, Craymen had become rather philosophical about everything, by necessity. If the Empire was truly built by a splinter faction of Seekers, then as a “Lord” in the Imperial Academy Craymen probably had access to a great deal of the same research and lore that the Seekers’ outlook was based on. The thing is, I believe a great deal of that was also flawed and inconclusive. And it has always seemed very consistent to me, how much the Seekers’ own agenda colors their… propaganda?

I’ll again note that phrasing: “it is said”, and the “surmising” of the conditions for the mediator. This whole subject is explicitly NOT expressed as factual.

Whereas we have two other characterizations from within the game, that are far more matter of fact, and from much more direct sources: Azel, and the Dragon / Heresy program itself. Azel gives the most irrefutable explanation for the general nature of dragons, and the bond they make with their riders. Of course she does not fully understand how Edge’s dragon is different, but she would still KNOW more than anyone else, again generally.

And once again, Lundi’s own accounts directly contradict that strict literal characterization of the mediator conditions. He knew NOTHING about the larger stakes involved, and describes the feeling of being compelled by the dragon’s own purpose. And when the dragon itself tells us that its ultimate purpose is for the Divine Visitor, where does that then leave the role of the mediator? As I’ve said before, I simply do not believe it is something intended to be accepted so literally and simply - it illuminates, but also distorts, like so much of the rest of the material.

I believe Craymen was a profoundly disillusioned individual, and a product of his culture as well. That he could have contempt for the Empire’s violence, while being so ruthless himself, illustrates the essential conflict in his character. At that point, I think his vendetta against the Empire itself would have probably outweighed all other concerns regardless, and as it is also said: the enemy of my enemy is my friend.

Edge had already become the Empire’s enemy, and he wanted to protect Azel. Any other motives can seem inconsequentially moot.

Things were also invented and became popular simply because we needed them. Necessity creates a demand for something.

If people don’t need to learn or to strive for something, they generally won’t, but there will always be exceptions to the rule.

The Soviet Union is the perfect example of this. People are hugely motivated by the profit motive, and the need for money is created by the necessity to trade. Money is nothing more than a refined bartering system that actually works.

The ring of power is hard to resist you know. :slight_smile: EA bought BioWare when BioWare were aiming to be independent. Sometimes you get an offer that you can’t refuse. When money buys everything, few can resist.

The only thing that will help the games industry now is for games to become cheaper to make, and the only way that will happen is if there’s a lot of chaos and a lot of competition, where competitors are willing to invest more money for less profit.

Again, necessity will make that happen. Eventually.

There will always be exceptions to the rule.

[quote=“Solo Wing”]If you mean psychological needs, then I would agree. Inventors may have the psychological need to make their mark or whatever. These psychological needs are a type of desire. If that desire is strong enough, it’s the only desire they need, so long as they have the resources to survive.

However, when it comes to just the need to survive, I don’t see any evidence that that type of pressure creates or promotes a desire to invent. It just creates a desire to get a job - any old job - in order to survive. The most rational thing to do would be to work somewhere that makes a lot of money, such as a bank, rather than doing something creative.[/quote]

You’d have to do both, and unfortunately making money isn’t a choice.

All I am saying is, if you have a particular talent, in a world where you need to learn how to survive, you will need to use it and profit from it somehow. So necessity basically forced it out of you. You would probably do it anyway, but if it wasn’t necessary then you’d be less motivated.

You can want to do something because you need to do it - because it’s necessary to get what you want.

Human beings are reactionary creatures, so when we stand still we die, unless we are very disciplined or we are dreamers.

[quote=“Solo Wing”]Earlier I wrote there was a distinction between (a) a system that takes away the need to work and (b) a system that takes away the need to think.

I hope I’ve communicated clearly why having system (a) would not stifle innovation - quite the contrary. With plentiful resources, more time to think and create the inventions, and without the pressures to do a mundane, non-creative job in order to survive, there would be more time and energy available to work on creative projects. By more energy, I mean mental or physical energy that wouldn’t be sapped by working a 40-hour week.

If (a) happened, I don’t think (b) would necessarily follow. With no work, I think people would get bored of not thinking, and so seek out tasks that intellectually stimulated them. This wouldn’t mean that everyone would need to learn how all the technology worked. You’d have people splitting off into their different interest groups. Those who wanted to learn how the technology worked could do so, but it wouldn’t be for everyone. So long as there was a critical mass of people who understood it, and the power was distributed rather than centralised (like the Internet), that would be enough.

In any case, if people decided not to think it wouldn’t have anything to do with being out of work, since most jobs divert people’s attention away from thinking about what really matters.[/quote]

We will have to agree to disagree. I agree that it would give people more time, but IMO they’d be less motivated to learn anything because it’s simply not necessary.

But this is all assuming that people remain the way they are. If you take the Western Roman Empire, for example, it fell because it relied too much on imports, stopped being self-sufficient, taxed its people to death, and literally trained its own enemies. It was brought down by too much infighting, and eventually it could not recover. They simply got lazy.

All the warning signs were there but they chose to be lazy. Just like today in our world of endless credit.

I’m just saying that because the anti-technology faction believed in free will and that people shouldn’t be controlled by technology, they might be morally obligated to extend the dragon program the same rights if it was capable of free will (which I assume it was).

If it wasn’t then it’s a different story.

From what I gather it only needed a human because that was the way it was programmed. It was meant to give humans control of their own destiny.

But the dragon could have continued its mission with a drone or alone, leaving human beings out of the equation entirely. I’m just saying that using a human is inefficient unless the rebels were really smart, and even then a drone might do a better job unless humans were somehow better at doing the job.

It seems like an ideological choice rather than simply doing what’s most efficient, which might explain why the rebels lost. Even humans with technology can’t compete with pure technology. That’s probably why the Technology faction gained so much power and support in the first place. People would see themselves as redundant especially without technology.

So this really comes down to a debate about what it means to be human. Do we let technology overtake us, which it undoubtedly will, or do we make sure that we somehow remain in charge of our own technology, even if that means slowing down a bit? I vote for the latter.

A highly disciplined power hungry culture will do strange things to people.

I guess the one positive thing about not needing to work is that it would make greed unnecessary. A culture based on greed without safety nets also does strange things to people. >:)

Lundi’s account of being compelled by the dragon was written before the dragon revealed his mission, but after it seems things became much clearer to him. Consider the following passage from the Old Diary:

Although I agree with your comment that much of what the Seekers wrote is filtered through their own propaganda, and that the exact conditions of the mediator are uncertain, Lundi’s connection with the dragon is a strong reason to believe that the role of mediator itself isn’t something that the Seekers just made up. The guidebook translation says that:

Which fits very closely with what Lundi wrote about how the dragon shared ‘the secret of his existence’ with him. His existence being that of a program designed by the Destruction Faction, but whose purpose was to enable a human to make the decision put forth in the original goal of the program. This human could be the Divine Visitor, and it wouldn’t contradict the basic nature of the dragon program and the mediator that very likely would have been revealed to Lundi.

This is just theory of course, but the mediator could be seen as the Divine Visitor acting through Edge. And perhaps through the other dragon riders too. In a way, we divine visitors are Edge over the course of Panzer Dragoon Saga. The roles of the Divine Visitor and the riders perhaps aren’t as separate as we’ve come to generally think.

Interestingly, Futatsugi mentioned the Divine Visitor in one of the 1up interviews, saying that you (the Divine Visitor) are the main character, Edge is your avatar, and Azel is who the story is really about:

There’s not much separation of Edge and the Divine Visitor’s role here, although of course I’m not going to attempt to read the creator’s mind. :anjou_happy:

Perhaps we have no way to entirely agree on that criteria Solo, but perhaps I just can’t see any ultimate distinction between reading the creators’ minds… and simply reading the creation? :anjou_love:

I agree the Divine Visitor could just as well be considered “a human” and “the mediator” for this context. But that becomes a circular point: the Seekers were confused about what the Divine Visitor was, and that would make them just as confused about the mediator in those terms. The theme is there, it is recurring and somewhat central; as always I also see a trap in conflating the literal technicalities and those overarching themes.

In essence you have characterized the mediator as almost a metaphor, with that theory. Which is largely in line with what I’m saying as well, I think?