SENTENTIA. European Journal of Humanities and Social Sciences
Philosophical Problems of Pragmatic Theories: Genesis and Architectonics, II
Date of submission to the editorial office: 24-07-2018
Abstract: The subject of this research is real pragmatics, in other words, the purposeful activity of social subjects (individuals, human associations). Real pragmatics as a whole, in the unity and interaction of the natural and human realms, is the subject of pragmatic theories. In practical terms, the real tasks of analysis and management of large systems, and their systemic analysis in the unity of the natural-scientific and socio-humanitarian components, are a real necessity. The article determines the requirements to the information base and architectonics of a pragmatic theory that are essential for building a trustworthy scientific theory of real pragmatics. A comparative analysis with the construct of “theoretical knowledge” proposed by V. S. Stepin is conducted. An interdisciplinary systemic approach is used: the methods of information theory, mathematical logic and systems analysis, subject-matter interpretation of conclusions and analysis of adequacy. General laws and requirements to the architectonics of the components of reliable and substantive pragmatic theories are formulated. The conducted comparative analysis demonstrates for the first time that the pattern of structuring well-founded and conclusive pragmatic theories is fully compatible with the construct of (post-nonclassical) “theoretical knowledge” of V. S. Stepin; the necessary refinements and additions are presented. The author also notes that the pattern of theoretical knowledge proposed by V. S. Stepin has as its source not only the physical theories he studied comprehensively, but also mathematical theories.
Keywords: terms and (logic) formulae, dogmatic foundations, rationalistic foundations, basis of a theory, ideal objects, concepts and constructs, information base, critical analysis of information, conceptual analysis, architectonics of pragmatic theories
Philosophic Problems of Pragmatic Theories: Genesis and Architectonics
Follow-up of Part I of the article with the same title
4 Basis of theory; fundamentals
A theory under construction must be based on fundamentals: primary statements about the properties of the objects under study (called axioms in mathematics) that are accepted as true. In object theories and research, the role of axioms is played by experience-consistent basic assumptions and presumptions that simplify the objects of research by neglecting negligible factors. Without doubt, axioms understood in this way are present in any theory and any research: they are the basics of knowledge in a subject, the statements that serve as a basis for all further reasoning or actions of the researcher (what can be used as substantiation), even if the researcher in question does not acknowledge it.
Concepts are primary objects; fundamentals are primary statements (laws of a given theory). Fundamentals are not provable; on the contrary, they are fundamentally unprovable (just as concepts are not definable): determining their truth would require other statements (predicates or propositions) already acknowledged as true, which would lead to the “infinite [downwards] series” described by Aristotle:
…but in order to draw universally true inferences one should look to that which really is… The other signifies: can one, when starting with a term which is predicated of another term while no other is predicated of it, proceed downwards along an infinite series? This enquiry is identical with the question whether demonstrations are illimitable, i.e. whether everything is capable of demonstration, or whether the process must terminate in both directions (Aristotle trans. 1901, pp. 47–48)
and the like. This understanding was eventually lost.
But how are fundamentals introduced? Contemplating scientific cognition in the context of “research programs”, P. P. Gaidenko (1993, p.201) indicates two methods of structuring science. One comes from Aristotle: the deduction of inferences from originally true fundamentals. The other, proposed by G. Galilei and R. Descartes, consists in confirming hypothetical “basic truths” by their consequences (improperly labeled as proof; Newton, however, corrected this to “deduc[ing] causes from effects” (Newton 1730, p.344)). The progenitor of pragmaticism, C. Peirce, proposed determining the truth of hypothetical (doctrinal) fundamentals in human activity through their consequences (what is nowadays referred to as the pragmatic criterion). It should be noted that this idea had already been proposed by Kant (1929, p.17).
These problems (the link between the genesis and functioning of a theory; the constructivity principle) are discussed in (Stepin 2000, pp.503–504), where both variants of reasoning are indicated:
The construction of a theoretical scheme is conducted as an interaction between the scientific image of reality (?), the mathematical apparatus, and the empirical and theoretical material generalized in the theory. It first implies motion from the image of reality to a hypothetical variant of a theoretical scheme, and then from it to empirical material. This is the first cycle of the theory-construction process, essential to proposing a hypothesis. But then the reverse motion occurs: from the generalized empirical and theoretical material to the theoretical scheme and back to the image of reality. This is the second cycle in proposing a hypothesis.
However, the categorically indicated order of the “cycles” is debatable. Historically, the majority of natural-science discoveries occurred through the “reverse motion”; moreover, the methodological notes of the authors of great discoveries (I. Newton and D.I. Mendeleev), as well as the process of discovering the Periodic Law as described by Mendeleev himself, prioritize motion from empirical material (Mendeleev 1953; Kedrov 1958; Vavilov 1989; Zholkov 2015). However, the essential point is not the order but the interconnection of both variants in the development of research thought.
An essential aspect of a theory under construction is an understanding of the nature (type) of its concepts and fundamentals. This matter is directly related to the sources of the fundamentals comprising the basis (foundation) of a pragmatic theory; they can be divided into three categories: empirical (phenomenal and hypothetical), rationalistic (super-phenomenal and pre-phenomenal; see below) and dogmatic (non-scientific).
Precise empirical experience (observation, measurement, experiment) must become the basis of an information base; repeatedly confirmed and unrefuted experience (or an idealized experiment) is the base (source) of empirical fundamentals. The tradition of substantiating an empirical basis by incomplete induction leads to its conventional interpretation as a generalization of isolated experiments: if free reproduction is unambiguous, then the statement is true. This interpretation is false and leads to other fundamental errors. Bacon’s approach is a case of such an error: proving fundamentals through experience (Bacon 1889, p.90). Experience (and fact) is not a proof of necessity and cannot be one; isolated facts (as conclusions ad hoc, random conclusions) cannot guarantee the unconditional truth of universal laws, and such attempts at substantiating laws have rightfully been criticized by philosophers since Plato. The problem must be posed differently: unambiguous free reproduction can be considered a base for formulating a principle (scientific hypothesis, axiom), since there is no evidence to consider otherwise. Thus the endlessly recited “black swan” metaphor of K. Popper is fundamentally erroneous: from the fact that we have only seen white swans we do not arrive at the conclusion that black swans do not exist; there is simply no reason for baseless “speculation on the unknown” until we discover black ones. “Hidden properties have no place in experimental philosophy”, Newton wrote in the Principia (Newton 1946, pp.506–507).
Empirical (incomplete) induction cannot be regarded as a “process of discovering and proving general assumptions”, as many philosophers and methodologists still think (see Lipkin 2015, p.91 and elsewhere); regarding it as a proving argument is an error, since a disproving example can take a considerable amount of time to be found (more details in Zholkov 2004, p.245). Let us introduce a notable example. Substituting the whole numbers 1, 2, … consecutively into the quadratic 991n²+1, we will not obtain a perfect square no matter how many days or years we dedicate to these calculations. However, concluding that no such number is a perfect square would be erroneous. The lowest value of n for which 991n²+1 is a perfect square is quite substantial:
n = 12055735790331359447442538767.
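A direct computational check makes the point vivid. The sketch below (plain Python, exact integer arithmetic; the bound of one million trials is an arbitrary illustration) confirms that no small n works while the value quoted above does:

```python
from math import isqrt

def is_square(x: int) -> bool:
    """True iff the non-negative integer x is a perfect square."""
    r = isqrt(x)
    return r * r == x

# A million consecutive trials find no perfect square of the form 991*n^2 + 1...
assert not any(is_square(991 * n * n + 1) for n in range(1, 1_000_001))

# ...yet the n quoted in the text does yield one (Python integers are
# arbitrary-precision, so the check is exact):
n = 12055735790331359447442538767
assert is_square(991 * n * n + 1)
```

Induction over any finite prefix of the natural numbers would thus have “confirmed” a false general statement.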
Nevertheless, in Opticks Newton correctly states that “although the arguing from Experiments and Observations by Induction be no demonstration of general Conclusions, yet it is the best way of arguing which the Nature of Things admits of” (Newton, 1730, p.380).
Though it is repeatedly confirmed experience (or an idealized experiment) that becomes the basis of scientific hypotheses and of an object theory under construction, empirical objects and fundamentals, despite the conventions of positivism, extend far beyond “protocol sentences”, as noted by D.I. Mendeleev: “The result of observation and experience in chemistry is not a simple entity as it used to be but an element — this corresponds to idea, not experience”. It was eventually also noted by Carnap (1966, p.227):
Theoretical laws… are laws about such entities as molecules, atoms, electrons, protons, electromagnetic fields and others that cannot be measured in simple, direct ways… The term “molecule” never arises as a result of observations. For this reason, no amount of generalization from observations will ever produce a theory of molecular processes. Such a theory must arise in another way. It is stated not as a generalization of facts but as a hypothesis.
Stepin (2000) likewise, in emphasizing the “objective status” of the empirical basis, states that there is no pure scientific empiria free of an admixture of the theoretical.
But even in natural science there is a different sort of empirical fundamentals, ones that can be referred to as “hypothetical”. Their bases are facts that cannot be subjected to free reproduction or empirical verification. First, these are historical artifacts and descriptions (accounts) of ages long gone. Second, the results of experiments that we consider to reproduce processes of the distant past (e.g. in cosmological theories). Third, the results of hypothetically adequate modeling of physical or human-science processes. Such facts form a hypothetical information base and ground the creation of appropriate hypothetical fundamentals (hypotheses and abstract principles). They are characteristic, first and foremost, of evolutionary theories, as well as of human-science conceptions, since reproduction is impossible here. Such hypotheses are confirmed when real events match deductive hypothetical conclusions (given that the subjects and events of real pragmatics are only partially observable). This means that the verification of their truth is based on the pragmatic criterion. Hypothetical humanitarian information also serves as a base for hypotheses.
Facts and empirical fundamentals, phenomenal and hypothetical, become the foundation of the empirical part of a theory.
The other, no less important, part of a theory’s basis is comprised of its rationalistic components and fundamentals, split into super-phenomenal and pre-phenomenal. All infinite fundamentals, i.e. axioms linked to infinite objects and infinite procedures, are super-phenomenal. As unusual as it might seem, so are all the properties of measurement and analysis tied to real numbers: the mathematical formulae included in scientific laws are super-phenomenal fundamentals. Any measurement tool can only display values that are multiples of the minimal marking of its scale, so it can neither register an irrational value nor determine whether two irrational values are equal. This inability is fundamental, not a result of measurement error. The unavoidable error of measurement, and consequently the unreliability of formulae, only adds to the problem. The mathematical formulae involved in scientific laws are super-phenomenal principles produced by reasoning (based, of course, on measurements with an accuracy adequate to the current level of measurement tools and of science in general). As such, empiricism is certainly insufficient as a doctrine of scientific research and in that sense is provably wrong.
The clearest examples of super-phenomenal fundamentals are produced by the infinite properties of the geometry of space (the parallel axiom, the completeness axiom). Another super-phenomenal fundamental of similar importance is the countable additivity of probability; stochastic analysis would be impossible without it. Like any other infinite property, it cannot be directly verified in a finite amount of time; we can only verify finite additivity. But then any self-respecting scientist should ask: by what right do we apply the results of probability theory to chance-based real-world occurrences? This right is given to us by rational experience, the result of scientific analysis. The point is that in all the stochastic sample spaces in which practical problems are considered (R, Rⁿ, R^∞, and even analytic and Borel spaces), a finitely additive function is countably additive, as it has a compact base (Neveu 1965, pp.26–28). It is regrettable that this matter does not get proper representation in textbooks on probability theory.
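For reference, the distinction at issue can be written out in standard measure-theoretic notation (standard definitions, not taken from the source): finite additivity quantifies over finite disjoint families only, countable additivity over countably infinite ones, and only the latter supports the limit arguments of stochastic analysis.

```latex
\text{finite additivity:}\qquad
\mu\Bigl(\bigcup_{i=1}^{k} A_i\Bigr)=\sum_{i=1}^{k}\mu(A_i)
\quad\text{for disjoint } A_1,\dots,A_k;
\\[4pt]
\text{countable additivity:}\qquad
\mu\Bigl(\bigcup_{i=1}^{\infty} A_i\Bigr)=\sum_{i=1}^{\infty}\mu(A_i)
\quad\text{for pairwise disjoint } A_1,A_2,\dots
```

The former is checkable on any finite family of events; the latter is an infinite property in exactly the sense discussed above.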
Naturally, there are further examples of “intricate” super-phenomenal spaces and axioms provided by topology, and even more so by set-theoretic topology; moreover, some of these examples find applications in physics and astronomy (and, surprisingly, in economics).
A different sort of rationalistic objects and fundamentals also exists: the pre-phenomenal. These can appear in the development or specification of scientific theories regardless of our intentions. The clearest examples of pre-phenomenal fundamentals are provided by geometry. The compilation of axioms and postulates formed by Euclid missed several essential terms and axioms, partly because they seemed self-evident, partly because they were not yet recognized (Alexandrov 1987, p.256). For Euclidean geometry to be presented as a correct theory, it is inevitable and necessary that they appear in any possible form of axiomatics. In a sense, this means the missing axioms existed before the geometers’ research. They cannot be considered to go beyond “any possible experience”, but they can go beyond the initially acquired experience. Let us note two more important and prominent examples. The space of continuous functions C[0,1] is complete and closed in the uniform metric. But in the integral metric it is incomplete; its completion is the space of Lebesgue-integrable functions, including discontinuous functions so distant from continuous ones that they basically retain only a formal resemblance to them. The completion of C[0,1] to the space of integrable functions L² (or Lʳ) inevitably causes the Lebesgue integral to appear. Had H. Lebesgue not existed, this integral would have been introduced anyway, but it would bear a different author’s name.
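A standard textbook illustration of this incompleteness (an assumed example, not from the source): the continuous functions below form a Cauchy sequence in the integral metric, yet their L²-limit is a discontinuous step function lying outside C[0,1].

```latex
f_n(x)=
\begin{cases}
0, & 0\le x\le \tfrac12,\\[2pt]
n\bigl(x-\tfrac12\bigr), & \tfrac12< x< \tfrac12+\tfrac1n,\\[2pt]
1, & \tfrac12+\tfrac1n\le x\le 1,
\end{cases}
\qquad
\|f_n-f_m\|_{L^2}\le \Bigl(\tfrac{1}{\min(n,m)}\Bigr)^{1/2}\to 0,
\qquad
f_n \xrightarrow{\;L^2\;} \mathbf{1}_{(1/2,\,1]}.
```

The limit is the indicator of (1/2, 1], which no continuous function can equal, so the Cauchy sequence has no limit inside C[0,1].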
Such objects also include physical objects and previously unknown properties predicted by theoreticians, whose existence was later confirmed by experience, directly or indirectly (most prominently: the Higgs boson, neutrino mass, the hypothetically symmetrical (“sterile”) neutrino, axions…). Pre-phenomenal fundamentals are also natural for the cosmological theories of physics and the evolutionary theories of biology, for instance the missing links of evolution. Their emergence is caused by theoretical constructions or new experimental results (in accordance with the pragmatic criterion). In pragmatic analysis, the construction of a theory (with the full extent of strictness) is not just a necessary way of obtaining veritable conclusions, but a means of completing the information to a “sufficient information base” (see the notes on categoricity in (Zholkov 2015, pp.107–108, 122)).
Concepts are introduced through the relations, operations and properties given in axioms; there is no other way of introducing them into a theory. “It can be said: on a formal level… a segment is what is referred to as a ‘segment’ by axioms… It is a common saying that axioms can relate to ‘objects and relations of any nature’ as long as the wording applies… In this sense, axioms serve as implicit definitions of basic terms” (Alexandrov 1987, p.114). That is, axioms replace definitions by describing concepts’ functions as means of identification. H. Poincaré is less precise: “The axioms of geometry are… disguised definitions” (Poincaré 1913, p.201; Lipkin 2015, p.100). No, more than definitions! Thus the parallel axiom (Alexandrov 1987, p.114) is not a definition (or partial definition) of a segment: it speaks not of an individual segment but of the mutual positioning of any pair of segments perpendicular to one secant. (Similarly, the standard school parallel axiom depicts the mutual positioning of parallel lines on a plane and postulates a property of planes (!), not of lines.) Moreover, in hyperbolic (Lobachevskian) geometry it is not true. Archimedes’ axiom and the continuity axiom, which describe certain properties of segments, or the angle laying-off axiom (Alexandrov 1987, pp.29–33), can hardly be considered definitions. A similar differentiation can also be made in other variants of axiomatics. Note that an axiomatics can be admitted to define concepts unambiguously only if it is complete. And in general, on Poincaré’s stance, why not also consider the subsequent statements (theorems) about the properties of geometrical figures to be definitions?
Following Poincaré, in (Lipkin 2015) such an introduction of concepts is referred to as “implicit definition”. However, the claim that a conventional definition (a descriptive one, through words and previously introduced terms) is more explicit than introducing objects through their characteristic properties is debatable. Incidentally, classifying researched objects by verifying all their characteristic properties is a common procedure in mathematics. But what is more important: through the inevitable evolution of real-world objects, the entity in question remains what we have identified it to be only as long as its characteristic properties remain (until a table in a garden rots, a bridge crumbles, etc.). In my opinion, only descriptive definitions should be understood as definitions; that is how derivative terms will be introduced below.
Another important aspect should be noted. Ideal concepts should not be introduced if real ones suffice. This is why A.D. Alexandrov’s axiomatics features segments, not (infinite) lines, which are unobservable and unreproducible. Alexandrov himself recognizes this and notes that segments have a real prototype: a chalked string and its trace.
Super-empirical analysis is an essential component of both natural-science and human-science theories. The empirical and rationalistic parts of a basis, and their development within a theory, are objective. They represent scientific research of the objective world, independent of the psychology, habits or fallacies of researchers. Yet science and the rationalistic approach are not almighty. “The enrichment which art can give us originates in its power to remind us of harmonies beyond the grasp of systematic analysis”, N. Bohr (2010, p.79) wrote; however, not only art lies beyond scientific analysis, but also certain aspects of pragmatics: the worldviews and behavior of people, their dogmatic aspects. Interests and underlying motives (including non-material ones, also referred to as values) are integral properties of humans; they direct the driving forces of human activity and of the pragmatic field as a whole.
Dogmatic concepts (religious, ethical and so forth) cannot be observed or verified, just as their objects cannot be observed or measured. They are predominantly entities of faith, not ratio. Due to the priceless gift every human has, free will, one’s dogmas become the foundation of one’s life philosophy and decision-making. Real pragmatics involves multiple motives, non-categorical and lacking objective bases. The spiritual values of different people define priorities and behavioral factors in different ways. Abstract notions outside of ratio become bases for alternative pragmatic theories.
The easier it is to verify the phenomenal fundamentals in a model, the fewer super-phenomenal ones there are, and the more clearly, compactly and harmoniously the fundamentals are organized, the easier it is to achieve results and the more convincing the conclusions are. “But to derive two or three general Principles… from Phaenomena, and afterwards tell us how the Properties and Actions of all corporeal Things follow from those manifest Principles”, Newton states in the last Query of Opticks (Newton 1730, p.377). The feasibility of this approach is acknowledged by Lorenz and Vollmer; it is included in the postulates of “hypothetical realism” (Vollmer 1998, p.53). Organizing the fundamentals of a pragmatic theory is harder than those of a natural-science one. Concepts and fundamentals in a pragmatic theory can have a different nature and a different quality of truth: phenomenal and super-phenomenal (primary and unconditional); or hypothetical, confirmed by the match between real consequences and deductive theoretical conclusions (since the subjects and events of real pragmatics are only partially observable); or dogmatic, which also strongly influence human behavior. The ability to form a theory’s basis is a remarkable ability of the human mind.
5 Technique and content of a theory. Conceptual analysis of real pragmatics
Any pragmatic theory is built from its basis by a uniform scheme. Derivative (formed from primary) objects (terms), relations and operations are defined. All derivative terms are definable from concepts. The technical (functional) means of a theory allow the acquisition (construction) of transformed or new objects and processes and their study, formulating their properties through logical means.
Connections between objects, their transformations and dynamics are defined by functions. In mathematics, derivative functional terms and objects are given through terms in functional symbolics; the logical component of a mathematical theory is developed through deduction: the rules of logical inference, the logical formulae that allow “small truths” to be used for the derivation of bigger ones, and the complex from the simple (Kolmogorov & Dragalin 2004, pp.52–65).
In mathematics, terms are correctly structured texts over functional operations and their properties, describing the functional content of a theory. They are structured as an inductive (from simple to complex) procedure. In any object theory these are texts narrating the technologies of the given knowledge (activity): the formulations and solutions of functional or technical problems. The content of these texts is a given functional technology or a technology found in the process of research. The technological problems of an activity (and of the corresponding theory) are not problems of language; their solutions are matters of technology, not linguistics, and a new technology is most certainly not just a new phrase of a language. Nor does the presentation or transmission of a technology need to have a linguistic form. The same applies to mathematics.
Logical formulae are also structured as inductive inference. It is paramount to determine the correct rules of logical inference so as to exclude the derivation of false conclusions from true premises. It is no coincidence that Aristotle was called “the prince of philosophers” in the Middle Ages. A long and detailed analysis made it possible to find and formalize (in line with Leibniz’s logical program, first detailed in the 1666 “Dissertation on the Art of Combinatorics”) the essential logical operations and rules of analysis in the form of the predicate calculus (PC) (Kolmogorov & Dragalin 2004; Vereshchagin & Shen 2008). The process of deriving logical formulae, called the natural deduction method in PC, is roughly equivalent to mathematical proof and even to logical inference in natural speech (Kolmogorov & Dragalin 2004, pp.98–100). That is why the logical forms of PC can be regarded as the most reliable forms of logical reasoning and proof in pragmatic theories, to be resorted to for verifying the correctness of particularly complex reasoning.
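As a toy illustration of the reliability of such logical forms (a sketch of my own, not from the source; propositional rather than full predicate calculus), the validity of an inference rule can be checked exhaustively: a rule is correct precisely when no truth-value assignment makes all premises true while the conclusion is false.

```python
from itertools import product

def valid(premises, conclusion, n_vars):
    """A propositional inference is valid iff no assignment of truth
    values makes every premise true while the conclusion is false."""
    return all(
        conclusion(*vals)
        for vals in product([False, True], repeat=n_vars)
        if all(p(*vals) for p in premises)
    )

implies = lambda a, b: (not a) or b  # material implication

# Modus ponens (from A and A -> B infer B) never yields a false conclusion:
assert valid([lambda a, b: a, lambda a, b: implies(a, b)],
             lambda a, b: b, 2)

# Affirming the consequent (from B and A -> B infer A) does, so it is invalid:
assert not valid([lambda a, b: b, lambda a, b: implies(a, b)],
                 lambda a, b: a, 2)
```

For full predicate logic no such finite check exists, which is exactly why the formalized inference rules of PC matter.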
Thus, a deductive theory in mathematics develops both through the functional construction of objects from concepts (for the empirical part, the matching construction of entities from constructs) and through the logical inference of new statements, as was designed by D. Hilbert. Functional construction (of terms) matches the “genetically-constructive method of theory building”, as V.A. Smirnov (1962) called it. So the understanding of the “axiomatic method” described in (Stepin 2000, pp.127–128; 2015, p.46) as a “method that involves taking a certain system of expressions describing a certain area of objects and a system of logical operations with expressions”, i.e. merely as a chain of logical inference, is incorrect.
Addressing the renowned methodological works of T. Kuhn and I. Lakatos, it is necessary to note that they focus on researchers and their activities, not on the requirements to a future theory and its architectonics. This explains the numerous studies on the psychological problems and stimuli of researchers, unrelated to the theory itself. Analyzing the consecutive process of scientific research, T. Kuhn constructs the cognitive scheme within “paradigms”, while I. Lakatos utilizes “research programs” (Kuhn 1977, 1994; Lakatos 1995). According to Quine (1979, p.23), scientific method is recognized by “sensory stimuli, a taste for simplicity in some sense, and a taste for old things”. According to the currently popular opinion of Lakatos (1970, p.100), “If a research program progressively (i.e. predicting novel facts — “progressive problems shift”) explains more than a rival, it ‘supersedes’ it”. This point of view seems justified in evaluating a researcher’s activity preceding the creation of a theory. But as a requirement to the theory itself it is insufficient.
Notably, Stepin (2000, p.188) expresses the following idea: “In principle it can be said that even in the most advanced research of foundations of science (which include T. Kuhn’s works) western philosophy of science is not analytic enough. It has not determined what the main components of foundations of science are and how they are connected…”
The research entities, languages and fundamentals of various theories obviously differ; however, the laws of logical inference and the structure of correct deductive theories (in other words, the “rules of all thought, whether it be a priori or empirical, whatever be its origin or its object”, as well as “the faculty of reason in general, in respect of all knowledge after which it may strive independently of all experience” (Kant 1929, pp.9, 18)) do not depend on the object domain. Correct conclusions and reliable technologies that prevent houses from crumbling, planes from falling, and state, industrial and personal relations from ending in catastrophe can only be obtained through argumentum omni denudatum argumento (proof devoid of all decoration), a condition that is not always fulfilled in sociopolitical research.
The approach described in (Lipkin 2015) denoted as “object-based” indicates a somewhat different (also possible) scheme of constructing physical theories.
Outlining the scheme of constructing scientific theories (pragmatic ones included), we should note the specificity of pragmatic analysis and identify the sequential actions in the analysis of real pragmatics (of the historical process in particular), when “the active subject of cognition is in the studied realm” (Stepin 2000, p.625). The details of pragmatic analysis and the practical results of its application are thoroughly studied in (Zholkov 2015).
1. First, a critical analysis of information in its complete (known) extent and context, with all its contradictions, without withholding or concealment. This is a matter not just of the scrupulous gathering of facts (undoubtedly useful) or of the requirement of total coverage of the subject of cognition. It is a matter of the impermissibility of omitting or distorting firmly established facts, just as it is impermissible in the natural sciences: the existence of veritable facts that contradict the proposed conception requires its revision, as has happened in the natural sciences on several occasions. After removing the contradictions inside an information base, or its contradictions with the proposed conception (or explaining why that is impossible), the necessary evidential information base is determined: a base sufficient for final conclusions.
2. Second, a precise formalization of problems in their complete (known) extent, often serving as the key to their solution, and strict proof-evidentiary standards of inference (including the necessity of reliable sources), which also determine the information base sufficient for categorical inferences (where such are possible) or the ways of finding the missing information.
3. Third, constructivity, i.e. the essential search for the motives and precise mechanisms of realizing the designs of all parties involved, as well as the account of the real balance of the main interests and powers (Zholkov 2015, pp.89–91) and of the personal qualities of power-wielding subjects. In this way, pragmatic information interaction reveals itself as a process of decision-making and actions of its participants based on the balance of interests and strategic plans, correct or incorrect.
4. Fourth, and this is the goal: the construction of a strict theory based on the balance of influencing factors, objective and subjective, according to their importance. It is this organization of thorough and proof-based conceptions, complete and free of contradictions, with concepts adequate to the real data, that allows the transition from describing pragmatics to understanding it.
Structurally, a verified, veritable pragmatic information base provides a base for the creation of separate models and components of the researched system (problem) in order to find the solution in question by combining them into a non-contradictory theory (conception), as mathematics (mathematical logic) views it, with the balance of influencing factors accounted for.
Strict requirements to the human-science component, to the same degree as to the natural-science one, reflect the very requirements of scientific cognition that Stepin (2000) wrote about. A deep pragmatic theory inevitably reflects the interaction of the disciplinary and interdisciplinary approaches and how they complement each other in harmony; it considers all the entities involved: the subject, the means [of interaction], the object, as post-non-classics “should” (Budanov 2015, p.105).
The specific non-uniqueness of pragmatic theories (and hence of strategic plans and the methods of their realization) lies not only in the incompleteness, unreliability and contradictoriness of the available information, but also in the presence of directly unverifiable religious, ethical or other abstract principles that humans use as a basis for decision making.
Pragmatic analysis in the context of a unified pragmatic theory (conception) is called conceptual analysis. It is in this sense that the term has been used in (Zholkov 2002). It should be regarded as a method of system analysis of pragmatics in accordance with the aforementioned problems and in no other way (conceptual analysis is a trendy term nowadays); its foundation rests on two cornerstones: 1) full representation and analysis of object information and of information interaction; 2) correct formation, system analysis and proof-based conclusions.
Its fruitfulness is determined not by general speculation but by success in solving specific pragmatic problems (social, politico-military, scientific and technological, practical, and problems of historical analysis). Let us point out some results. In (Zholkov 2015, pp. 362–375) Newton’s “Opticks” (1704) and “Philosophiae Naturalis Principia Mathematica” (1713, 2nd edition) are analyzed, and a viewpoint on the debate between Newton and Hooke is substantiated in detail: it is shown that their debate is not a dispute over priority, as is commonly believed, but a discussion of the necessary requirements for scientific theories. In (Zholkov 2013; Zholkov 2015, pp. 400–425) a new perspective on Kant’s “mathematical antinomies” and transcendental dialectics is presented: these “antinomies” are not antinomies; the essence of the issue lies not in the contradictory nature of thesis and antithesis (on the contrary, both are irrefutable) but in the non-exclusivity of conceptual representations. Conceptual analysis of M. Speransky’s “Projects and Memoranda” leads to the conclusion that Speransky is not just a “genius of political administration”, as is (justly) thought, but also a deep sociopolitical philosopher, which had not been noticed before (Zholkov 2011a; Zholkov 2015, pp. 206–226). In (Zholkov 2010; Zholkov 2015, pp. 261–362) some fundamental problems of nineteenth-century Russia and the decisions of its principal figures are analyzed, and new historical results are obtained within the framework of conceptual analysis.
The application of the principles and methods described in this article makes it possible to solve a complex academic problem: the precise modeling of oil prices (Zholkov 2011b; Zholkov & Korshunov 2012; Zholkov 2016). Contrary to conventional attempts to determine the numerical dependence of prices on significant “price formation factors” (as causes of price shifts), including entirely subjective and anti-economic ones (actions of unhinged political “actors”, etc.), the analysis is built on another principle, formulated by Newton, who proposed to “argue from phenomena without feigning hypotheses, and to deduce causes from effects” (Newton 1730, p. 344). In accordance with this, the prices themselves have been analyzed (as a statistical problem) on the assumption that they implicitly include all the causes (influencing factors), i.e. the price statistics itself is the Newtonian “effect”.
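The Newtonian principle invoked here, treating the price series itself as the “effect” that implicitly contains all its causes, can be illustrated with a minimal statistical sketch. This is emphatically not the model from the cited papers, only a hypothetical example of the general approach: a first-order autoregressive fit that extracts the dynamics of a (synthetic) price series from the series alone, with no exogenous “price formation factors”.

```python
# Illustrative sketch only: the price series is analyzed "from phenomena",
# without hypothesizing external causes. Pure standard-library Python.
import random
from statistics import mean

def fit_ar1(prices):
    """Least-squares estimate of (a, b) in p[t] ~ a + b * p[t-1]."""
    x, y = prices[:-1], prices[1:]
    mx, my = mean(x), mean(y)
    # Closed-form simple linear regression of p[t] on p[t-1].
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# Synthetic mean-reverting "price" series: p[t] = 10 + 0.8*p[t-1] + noise.
rng = random.Random(0)
p = [50.0]
for _ in range(499):
    p.append(10.0 + 0.8 * p[-1] + rng.gauss(0, 1))

a, b = fit_ar1(p)  # recovered intercept and persistence coefficient
```

On data of this length the fit recovers the generating coefficients quite closely, which is the point of the method: the regularity is deduced from the observed series itself rather than postulated from presumed causes.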
In summary, let us note the following new results presented in this work.
A detailed comparative analysis is introduced of the conceptual framework of pragmatic theories in relation to the paradigm of (post-non-classical) “theoretical knowledge” of V.S. Stepin. It is demonstrated that the proposed framework for constructing meaningful and evidence-based pragmatic theories is fully compatible with the framework of theoretical knowledge described in (Stepin 2000; 2015; etc.), and the necessary clarifications and additions are given.
V.S. Stepin proposes to call all objects of scientific theories abstract and “ideal”, while singling out two basic types of objects, empirical and theoretical. However, all the objects of a theory are “theoretical” (they deviate from real objects), and with respect to empirical objects it would be natural to focus not on abstraction (estrangement) but on the comparison of a theoretical object with its subject-domain prototype (referent). The classification of objects introduced in the present articles seems clearer and better substantiated. Comparing the framework for constructing theoretical knowledge in (Stepin 2000; 2015) with the framework introduced in this article reveals notable details and properties.
It can be argued that the framework of theoretical knowledge introduced by V.S. Stepin is rooted not only in the physical theories he thoroughly explored, but in mathematical theories as well.
The explicit formulation of the (philosophical) methodological principle that underlies a new research method and the precise modeling of oil and gas market prices is also presented.
The author considers it a pleasant duty to express his gratitude to Academician V.S. Stepin, Doctor of Philosophical Sciences V.I. Arshinov, Doctor of Philosophical Sciences V.G. Budanov and Doctor of Philosophical Sciences V.L. Vasyukov for substantive discussions.