Turquoise International Journal of Educational Research and Social Studies (TIJER)
Issue feed: https://theeducationjournals.com/index.php/tijer/issue/feed

Focus and Scope

Turquoise International Journal of Educational Research and Social Studies (TIJER) is an international, scholarly, open-access, peer-reviewed, and internationally refereed journal focusing on theories, methods, and applications in educational research and social studies.

TIJER is a biannual journal (one volume per year, published in two issues).

Controllability Tradeoffs in High-Dimensional Generative AI
Adrian Whitford, Eleanor Markham
https://theeducationjournals.com/index.php/tijer/article/view/273

High-dimensional generative AI models offer exceptional expressive capacity, yet controlling their output behavior remains a central challenge in practical deployment settings. As latent spaces grow in complexity, semantic representations often become non-linear and entangled, making precise directional steering difficult without compromising generative richness. This study examines how prompt conditioning, latent-vector manipulation, and external constraint mechanisms influence model controllability, stability, and expressive diversity. Through iterative generation analysis, workflow-driven integration testing, and robustness evaluation under input perturbations, the results reveal inherent tradeoffs between creativity and predictability. The findings underscore the need for context-aware controllability design, in which control intensity is adapted to the application domain, user intent, and operational constraints. Such adaptive balancing strategies enable generative models to achieve both expressive variability and reliable task-aligned behavior.

Published: 2020-04-15. Copyright (c) 2020.

Dynamic Memory Grant Calculation Thresholds in Oracle PGA Allocation Systems
Amelia Wexford, Daniel Armitage
https://theeducationjournals.com/index.php/tijer/article/view/276

Efficient Program Global Area (PGA) memory allocation is essential for maintaining stable query execution performance in Oracle database environments, particularly under fluctuating concurrency and mixed workload conditions. Static memory grant thresholds often fail to adapt to real-time load variations, resulting in work-area spills, temporary I/O overhead, and degraded response times. This study introduces a dynamic threshold adjustment approach that recalibrates PGA memory grants based on runtime workload behavior and memory pressure signals rather than static configuration or optimizer estimates. Experimental evaluation across analytical, transactional, and mixed workloads demonstrates that dynamic thresholding reduces spill frequency, improves latency stability, enhances throughput fairness among sessions, and accelerates recovery following overload events. The results highlight the role of adaptive memory tuning in sustaining predictable performance in enterprise systems, especially those driven by interactive, user-variable application layers.

Published: 2020-04-19. Copyright (c) 2020.
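The approach summarized above recalibrates per-work-area memory grants from runtime pressure signals rather than a static setting. A minimal sketch of that general idea follows; the function, signal names, and scaling factors are hypothetical illustrations, not the authors' implementation or any Oracle internal interface.

```python
def adjust_grant_threshold(current_mb, spill_rate, headroom_mb,
                           baseline_mb=64.0, min_mb=16.0, max_mb=1024.0):
    """Return a new per-work-area grant cap based on runtime pressure signals.

    spill_rate  : fraction of recent work areas that spilled to temp storage
    headroom_mb : free memory remaining under the aggregate PGA limit
    """
    if spill_rate > 0.10 and headroom_mb > 4 * current_mb:
        # Frequent temp spills with ample free PGA: grow grants to cut temp I/O.
        new_mb = current_mb * 1.25
    elif headroom_mb < current_mb:
        # Memory pressure: shrink grants so concurrent sessions stay under the limit.
        new_mb = current_mb * 0.5
    else:
        # Stable regime: drift gently back toward the configured baseline.
        new_mb = 0.95 * current_mb + 0.05 * baseline_mb
    return max(min_mb, min(max_mb, new_mb))

# Heavy spilling with plenty of headroom raises the cap from 64 MB to 80 MB.
print(adjust_grant_threshold(64.0, spill_rate=0.20, headroom_mb=2048.0))
```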
Ethical Constraint Encoding in High-Autonomy AI Simulation Scenarios
Adrian Falkner, Rowan Merrick
https://theeducationjournals.com/index.php/tijer/article/view/277

High-autonomy AI systems trained in simulation environments must be guided by ethical constraints that influence not only the outcomes they produce but also the internal reasoning processes through which decisions are formed. Traditional constraint strategies based on rule enforcement or reward shaping often fail under complex or adversarial conditions, leading to behavior that superficially meets ethical requirements while violating deeper normative expectations. This study introduces a framework for encoding ethical constraints directly into the representational and policy layers of autonomous agents, combined with dynamic context-based modulation that adjusts ethical priorities according to situational demands. Simulation results across cooperative, competitive, and mixed-motivation environments show that agents with embedded ethical priors exhibit consistent value-aligned behavior, maintain strategic adaptability, and resist exploitation attempts that circumvent rule-based controls. The findings highlight the importance of treating ethical alignment as a structural learning principle rather than a post-hoc regulatory mechanism.

Published: 2020-04-26. Copyright (c) 2020.

Explainability Fidelity Metrics for Post-Hoc Model Interpretation
Gregory Ashford, Celeste Rivenhall
https://theeducationjournals.com/index.php/tijer/article/view/278

Post-hoc explanation methods are widely used to interpret complex machine learning models, yet the fidelity of these explanations (how accurately they reflect the model's true reasoning) remains difficult to assess. Explanations that are easy to understand may oversimplify or distort the decision logic, while highly detailed explanations may be accurate but unusable in practice. This study presents a structured evaluation framework for measuring explainability fidelity through local sensitivity testing, global attribution coherence, representation-space alignment, and causal influence validation. Experimental results show that many commonly used attribution techniques generate persuasive but mechanistically incorrect explanations, particularly in deep models with distributed internal representations. Methods that incorporate causal perturbation and representation-level reasoning exhibit significantly higher fidelity. Additionally, deployment tests in cloud-integrated Oracle APEX environments reveal that explanation stability depends on the system execution context, reinforcing that fidelity is both a modeling and an operational concern. The findings provide a foundation for selecting and validating post-hoc interpretability techniques in high-stakes enterprise applications.

Published: 2020-04-30. Copyright (c) 2020.
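One of the fidelity checks named in the abstract above, causal perturbation of locally important features, can be illustrated with a small self-contained example: remove the features an explanation ranks highest and compare the resulting output change against removing random features. The toy model, data, and attribution below are assumptions made for the sketch, not the evaluation framework described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=20)        # weights of a toy linear model standing in for the model under study
x = rng.normal(size=20)        # instance to explain
attribution = np.abs(w * x)    # a locally faithful attribution for this toy model

def model(v):
    """Scalar prediction of the toy model for one instance."""
    return float(v @ w)

def deletion_drop(instance, ranking, k):
    """Change in model output after zeroing the k features ranked most important."""
    perturbed = instance.copy()
    perturbed[np.argsort(ranking)[::-1][:k]] = 0.0
    return abs(model(instance) - model(perturbed))

k = 5
drop_top = deletion_drop(x, attribution, k)
drop_random = np.mean([deletion_drop(x, rng.permutation(20), k) for _ in range(100)])

# Fidelity ratio: well above 1 means the attributed features causally matter more than chance.
print(f"fidelity ratio: {drop_top / drop_random:.2f}")
```

A ratio well above 1 indicates that the features the explanation highlights actually drive the prediction, while a ratio near 1 is the signature of a persuasive but unfaithful explanation.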