Turquoise International Journal of Educational Research and Social Studies (TIJER)
https://theeducationjournals.com/index.php/tijer

Focus and Scope: Turquoise International Journal of Educational Research and Social Studies (TIJER) is an international, scholarly, open-access, peer-reviewed, and internationally refereed journal focusing on theories, methods, and applications in educational research and social studies. TIJER is a biannual journal, publishing one volume per year in two issues.

Controllability Tradeoffs in High-Dimensional Generative AI
https://theeducationjournals.com/index.php/tijer/article/view/273

High-dimensional generative AI models offer exceptional expressive capacity, yet controlling their output behavior remains a central challenge in practical deployment settings. As latent spaces expand in complexity, semantic representations often become non-linear and entangled, making precise directional steering difficult without compromising generative richness. This study examines how prompt conditioning, latent-vector manipulation, and external constraint mechanisms influence model controllability, stability, and expressive diversity. Through iterative generation analysis, workflow-driven integration testing, and robustness evaluation under input perturbations, the results reveal inherent tradeoffs between creativity and predictability. The findings underscore the need for context-aware controllability design, in which control intensity is adapted to the application domain, user intent, and operational constraints. Such adaptive balancing strategies enable generative models to achieve both expressive variability and reliable task-aligned behavior.

Adrian Whitford, Eleanor Markham
Copyright (c) 2020
Published: Wed, 15 Apr 2020
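The abstract above names latent-vector manipulation as one of the steering mechanisms studied. As a rough illustration of that general idea only (not the authors' implementation), the Python sketch below shifts a sampled latent code along an assumed semantic direction with a tunable intensity alpha; the decoder, the direction vector, and the alpha values are hypothetical placeholders.

# Minimal illustration of latent-vector steering (not the study's method):
# a semantic direction is added to a latent code with a tunable intensity "alpha",
# trading steering strength (large alpha) against expressive diversity (small alpha).
import numpy as np

def steer_latent(z: np.ndarray, direction: np.ndarray, alpha: float) -> np.ndarray:
    """Shift latent code z along a normalized semantic direction by strength alpha."""
    unit = direction / (np.linalg.norm(direction) + 1e-12)
    return z + alpha * unit

# Toy stand-in for a generative decoder; a real model would map latents to images or text.
def dummy_decoder(z: np.ndarray) -> np.ndarray:
    return np.tanh(z)

rng = np.random.default_rng(0)
z = rng.normal(size=64)                  # sampled latent code
direction = rng.normal(size=64)          # e.g. a learned attribute direction (hypothetical)
for alpha in (0.0, 1.0, 4.0):            # increasing control intensity
    out = dummy_decoder(steer_latent(z, direction, alpha))
    print(f"alpha={alpha:.1f}  output drift={np.linalg.norm(out - dummy_decoder(z)):.3f}")

Larger alpha values push outputs more predictably along the chosen attribute at the cost of diversity, which mirrors the creativity-versus-predictability tradeoff the abstract describes.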
Dynamic Memory Grant Calculation Thresholds in Oracle PGA Allocation Systems
https://theeducationjournals.com/index.php/tijer/article/view/276

Efficient Program Global Area (PGA) memory allocation is essential for maintaining stable query execution performance in Oracle database environments, particularly under fluctuating concurrency and mixed workload conditions. Static memory grant thresholds often fail to adapt to real-time load variations, resulting in work-area spills, temporary I/O overhead, and degraded response times. This study introduces a dynamic threshold adjustment approach that recalibrates PGA memory grants based on runtime workload behavior and memory pressure signals rather than static configuration or optimizer estimates. Experimental evaluation across analytical, transactional, and mixed workloads demonstrates that dynamic thresholding reduces spill frequency, improves latency stability, enhances throughput fairness among sessions, and accelerates recovery following overload events. The results highlight the role of adaptive memory tuning in sustaining predictable performance in enterprise systems, especially those driven by interactive, user-variable application layers.

Amelia Wexford, Daniel Armitage
Copyright (c) 2020
Published: Sun, 19 Apr 2020

Ethical Constraint Encoding in High-Autonomy AI Simulation Scenarios
https://theeducationjournals.com/index.php/tijer/article/view/277

High-autonomy AI systems trained in simulation environments must be guided by ethical constraints that influence not only the outcomes they produce but also the internal reasoning processes through which decisions are formed. Traditional constraint strategies based on rule enforcement or reward shaping often fail under complex or adversarial conditions, leading to behavior that superficially meets ethical requirements while violating deeper normative expectations. This study introduces a framework for encoding ethical constraints directly into the representational and policy layers of autonomous agents, combined with dynamic context-based modulation that adjusts ethical priorities according to situational demands. Simulation results across cooperative, competitive, and mixed-motivation environments show that agents with embedded ethical priors exhibit consistent value-aligned behavior, maintain strategic adaptability, and resist exploitation attempts that circumvent rule-based controls. The findings highlight the importance of treating ethical alignment as a structural learning principle rather than a post-hoc regulatory mechanism.

Adrian Falkner, Rowan Merrick
Copyright (c) 2020
Published: Sun, 26 Apr 2020

Explainability Fidelity Metrics for Post-Hoc Model Interpretation
https://theeducationjournals.com/index.php/tijer/article/view/278

Post-hoc explanation methods are widely used to interpret complex machine learning models, yet the fidelity of these explanations, that is, how accurately they reflect the model's true reasoning, remains difficult to assess. Explanations that are easy to understand may oversimplify or distort the decision logic, while highly detailed explanations may be accurate but unusable in practice. This study presents a structured evaluation framework for measuring explainability fidelity through local sensitivity testing, global attribution coherence, representation-space alignment, and causal influence validation. Experimental results show that many commonly used attribution techniques generate persuasive but mechanistically incorrect explanations, particularly in deep models with distributed internal representations. Methods that incorporate causal perturbation and representation-level reasoning exhibit significantly higher fidelity. Additionally, deployment tests in cloud-integrated Oracle APEX environments reveal that explanation stability depends on the system execution context, reinforcing that fidelity is both a modeling and an operational concern. The findings provide a foundation for selecting and validating post-hoc interpretability techniques in high-stakes enterprise applications.

Gregory Ashford, Celeste Rivenhall
Copyright (c) 2020
Published: Thu, 30 Apr 2020
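The fidelity framework summarized above relies in part on perturbation-based checks. A generic deletion-style test of this kind (a minimal sketch under assumed inputs, not the metric defined in the paper) progressively removes the features an explanation ranks as most important and records how the model's output responds; the model, attribution vector, and baseline value below are hypothetical stand-ins.

# Illustrative perturbation-based fidelity check (generic sketch, not the paper's metric):
# features with the highest attributions are removed first, and a faithful explanation
# should produce a correspondingly large change in the model's output.
import numpy as np

def deletion_fidelity(model, x: np.ndarray, attributions: np.ndarray,
                      baseline: float = 0.0, steps: int = 5) -> list[float]:
    """Progressively zero out the most-attributed features and record the model output."""
    order = np.argsort(-np.abs(attributions))       # most important features first
    x_masked = x.copy()
    scores = [float(model(x_masked))]
    chunk = max(1, len(order) // steps)
    for i in range(0, len(order), chunk):
        x_masked[order[i:i + chunk]] = baseline     # replace the next batch of features
        scores.append(float(model(x_masked)))
    return scores                                   # a sharp early change suggests higher fidelity

# Toy linear model whose true "reasoning" is its weight vector (hypothetical example).
w = np.array([3.0, -2.0, 0.5, 0.0])
model = lambda x: float(w @ x)
x = np.array([1.0, 1.0, 1.0, 1.0])
faithful_attr = w * x                               # exact attributions for a linear model
print(deletion_fidelity(model, x, faithful_attr))

For this toy linear model the exact attributions produce the expected sharp early output change; attributions that misrank features would flatten that response, signalling lower fidelity.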